mit-han-lab/tinyengine

[NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning; [NeurIPS 2022] MCUNetV3: On-Device Training Under 256KB Memory

Score: 49 / 100 (Emerging)

This tool helps embed sophisticated machine learning capabilities directly onto tiny, low-power microcontroller devices, enabling them to process data and make decisions independently. It takes a trained neural network model and optimizes it to run efficiently on hardware with very limited memory. The primary users are engineers and product developers creating smart IoT devices like wearable sensors, smart appliances, or industrial monitors.

928 stars. No commits in the last 6 months.

Use this if you need to run deep learning models on resource-constrained microcontrollers for applications like object detection, voice recognition, or predictive maintenance, without needing a constant cloud connection.

Not ideal if your application runs on devices with ample memory and processing power, such as smartphones, PCs, or cloud servers, where larger models and more conventional frameworks are suitable.

edge-ai embedded-systems iot-development tiny-machine-learning on-device-inference
Badges: Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 23 / 25


Stars: 928
Forks: 155
Language: C
License: MIT
Last pushed: Nov 27, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/mit-han-lab/tinyengine"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
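The curl command above can also be wrapped in a small client. A minimal Python sketch is shown below; only the endpoint URL comes from this page, while the shape of the JSON response (and any field names in it) is undocumented here and left to the caller to inspect:

```python
"""Minimal client sketch for the pt-edge quality API.

Assumptions: only the endpoint URL pattern is taken from the page;
the response schema is not documented here.
"""
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for one repository."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the quality record (requires network access)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Prints the same URL used in the curl example above.
    print(quality_url("ml-frameworks", "mit-han-lab", "tinyengine"))
```

The URL builder is kept separate from the network call so the request target can be checked (or logged) without hitting the rate limit.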