HayeonLee/HELP

Official PyTorch Implementation of HELP: Hardware-adaptive Efficient Latency Prediction for NAS via Meta-Learning (NeurIPS 2021 Spotlight)

Score: 35 / 100 (Emerging)

When designing new neural network models, this project helps machine learning engineers predict how fast a model will run on different hardware, like mobile phones or specialized chips. It takes in details about a proposed neural network architecture and a specific hardware device, then quickly estimates the model's processing speed (latency) on that device. This allows engineers to choose efficient model designs tailored for their deployment hardware.
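The idea behind such a predictor can be sketched in a few lines. This is a hypothetical illustration, not HELP's actual code: given a handful of measured (architecture features, latency) pairs on a new device, fit a simple regressor and estimate latency for unseen designs. The feature encoding (bias, depth, width) and all function names here are assumptions; HELP itself uses a meta-learned predictor adapted with few measurements per device.

```python
# Hypothetical sketch of device-adaptive latency prediction (NOT the HELP code).
# Given a few measured (architecture-features, latency) pairs on one device,
# fit a linear regressor and predict latency for unseen architectures.
import numpy as np

def fit_latency_predictor(features, latencies):
    """Least-squares fit: latency ~ features @ w (features include a bias column)."""
    X = np.asarray(features, dtype=float)
    y = np.asarray(latencies, dtype=float)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def predict_latency(w, feature_vec):
    """Estimate latency (ms) for one architecture's feature vector."""
    return float(np.asarray(feature_vec, dtype=float) @ w)

# Toy measurements: features = [1 (bias), depth, width]; latencies in ms.
train_X = [[1, 4, 64], [1, 8, 64], [1, 4, 128], [1, 8, 128], [1, 12, 256]]
train_y = [3.38, 4.98, 4.66, 6.26, 10.42]
w = fit_latency_predictor(train_X, train_y)
est = predict_latency(w, [1, 6, 96])  # unseen architecture on the same device
```

A real predictor would use a richer architecture encoding and far more training data, but the workflow is the same: a few measurements per target device, then cheap predictions for the whole search space.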

No commits in the last 6 months.

Use this if you need to quickly estimate the latency of various neural network architectures on many different hardware devices with minimal actual testing.

Not ideal if you require exact, highly precise latency measurements that only real-world device testing can provide, or if your focus is solely on model accuracy rather than hardware efficiency.

neural-architecture-search edge-ai model-deployment hardware-optimization deep-learning-engineering
Flags: Stale (6 months) · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 11 / 25

How are scores calculated?

Stars: 64
Forks: 7
Language: Python
License: MIT
Last pushed: Aug 05, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/HayeonLee/HELP"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
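The same data can be fetched from Python using only the standard library. The endpoint URL is the one shown above; the shape and field names of the JSON response are not documented here, so inspect the raw response before relying on specific keys.

```python
# Fetch the quality data for a repo from the API shown above (stdlib only).
# The URL pattern is taken from the curl example; response fields are unknown
# here, so treat the returned dict's keys as something to inspect first.
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the API URL for one repository's quality report."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """GET the quality report as parsed JSON (counts against the daily quota)."""
    with urlopen(quality_url(category, owner, repo), timeout=10) as resp:
        return json.load(resp)

url = quality_url("ml-frameworks", "HayeonLee", "HELP")
```

With a free API key, you would add it as a request header or query parameter per the service's docs; the anonymous quota is 100 requests/day.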