HayeonLee/HELP
Official PyTorch Implementation of HELP: Hardware-adaptive Efficient Latency Prediction for NAS via Meta-Learning (NeurIPS 2021 Spotlight)
When designing new neural network models, this project helps machine learning engineers predict how fast a model will run on different hardware, like mobile phones or specialized chips. It takes in details about a proposed neural network architecture and a specific hardware device, then quickly estimates the model's processing speed (latency) on that device. This allows engineers to choose efficient model designs tailored for their deployment hardware.
No commits in the last 6 months.
Use this if you need to quickly estimate the latency of various neural network architectures on many different hardware devices with minimal actual testing.
Not ideal if you require exact, highly precise latency measurements that only real-world device testing can provide, or if your focus is solely on model accuracy rather than hardware efficiency.
Stars: 64
Forks: 7
Language: Python
License: MIT
Category:
Last pushed: Aug 05, 2024
Commits (30d): 0
Get this data via the API:

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/HayeonLee/HELP"

Open to everyone: 100 requests/day with no API key. A free key raises the limit to 1,000 requests/day.
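The same data can be fetched programmatically. A minimal Python sketch using only the standard library; the helper names are mine, and the response is assumed to be JSON (its fields are not documented here):

```python
import json
from urllib.request import urlopen

# Base endpoint taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"

def quality_url(owner: str, repo: str) -> str:
    """Build the API endpoint for a repository's quality data."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the payload (assumes the API returns JSON)."""
    with urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

print(quality_url("HayeonLee", "HELP"))
# https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/HayeonLee/HELP
```

Unauthenticated calls count against the 100 requests/day limit, so cache responses rather than refetching per page load.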
Higher-rated alternatives:
- pykt-team/pykt-toolkit: pyKT, a Python library to benchmark deep-learning-based knowledge tracing models
- microsoft/archai: Accelerate your Neural Architecture Search (NAS) through fast, reproducible, and modular research
- google-research/morph-net: Fast and simple resource-constrained learning of deep network structure
- IDEALLab/EngiBench: Benchmarks for automated engineering design
- AI-team-UoA/pyJedAI: An open-source library that leverages Python's data science ecosystem to build powerful...