mit-han-lab/mcunet
[NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning
This project helps embedded systems engineers and IoT device developers deploy complex machine learning models, like those for image classification or person detection, onto tiny, low-power microcontrollers. It takes your trained deep learning models and optimizes them to run efficiently, delivering faster inference and significantly reduced memory usage on resource-constrained IoT devices. This allows smart features to run directly on devices without needing to connect to the cloud.
664 stars. No commits in the last 6 months. Available on PyPI.
Use this if you need to run sophisticated AI capabilities, such as real-time image analysis, directly on microcontrollers with extremely limited memory and processing power, making your IoT devices smarter and more autonomous.
Not ideal if your application runs on devices with ample memory and computational resources, such as smartphones, personal computers, or cloud servers, as its primary benefit is extreme resource optimization.
Stars: 664
Forks: 105
Language: Python
License: MIT
Category: ml-frameworks
Last pushed: Mar 29, 2024
Commits (30d): 0
Dependencies: 2
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/mit-han-lab/mcunet"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
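The same endpoint can be queried programmatically. A minimal sketch using only the Python standard library, assuming the endpoint returns JSON (the response schema is not documented here, so it is decoded generically):

```python
import json
import urllib.request

# Quality-data endpoint for this repository; no API key is needed
# for up to 100 requests/day, per the limits above.
URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/mit-han-lab/mcunet"

def fetch_repo_quality(url: str = URL, timeout: float = 10.0) -> dict:
    """Fetch the repository's quality record and decode it as JSON."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return json.load(resp)

# Example (requires network access):
#   data = fetch_repo_quality()
#   print(sorted(data))  # inspect the top-level keys of the record
```

An API key, when used, would typically be passed as a header or query parameter; consult the service's documentation for the exact mechanism.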
Related frameworks
pytorch/executorch
On-device AI across mobile, embedded and edge for PyTorch
catalyst-team/catalyst
Accelerated deep learning R&D
z-mahmud22/Dlib_Windows_Python3.x
Dlib compiled binaries (.whl) for Python 3.7-3.14 and Windows x64
gigwegbe/tinyml-papers-and-projects
This is a list of interesting papers and projects about TinyML.
ai-techsystems/deepC
vendor-independent TinyML deep learning library, compiler and inference framework for microcomputers...