MingSun-Tse/Efficient-Deep-Learning

Collection of recent methods on (deep) neural network compression and acceleration.

Quality score: 48 / 100 · Emerging

This project helps machine learning engineers and researchers optimize deep neural networks for deployment in resource-constrained environments like mobile phones or embedded devices. It provides a curated collection of techniques for making existing neural network models smaller and faster. You provide a trained deep neural network, and this project offers methods to reduce its size and computational requirements while maintaining accuracy.

954 stars. No commits in the last 6 months.

Use this if you need to deploy large deep learning models on hardware with limited memory, processing power, or battery life, and you want to reduce their footprint and increase inference speed.

Not ideal if you want to design neural network architectures from scratch, or to maximize accuracy with no concern for computational efficiency.

deep-learning-deployment edge-ai model-optimization embedded-systems resource-constrained-ml
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 22 / 25


Stars: 954
Forks: 132
Language: (not listed)
License: MIT
Last pushed: Apr 04, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/MingSun-Tse/Efficient-Deep-Learning"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
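The curl command above can also be scripted. Below is a minimal Python sketch of calling the endpoint, assuming it returns JSON; the response schema is not documented here, and the `quality_url`/`fetch_quality` helper names are illustrative, not part of any official client.

```python
import json
import urllib.request

# Base path taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(collection: str, owner: str, repo: str) -> str:
    """Build the quality endpoint URL for a repository."""
    return f"{API_BASE}/{collection}/{owner}/{repo}"


def fetch_quality(collection: str, owner: str, repo: str) -> dict:
    """Fetch the quality record; assumes the endpoint returns JSON."""
    with urllib.request.urlopen(quality_url(collection, owner, repo)) as resp:
        return json.load(resp)


# Example usage (performs a live network request, subject to the
# 100 requests/day anonymous limit):
# record = fetch_quality("ml-frameworks", "MingSun-Tse", "Efficient-Deep-Learning")
```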