MCUNet and TinyEngine
MCUNet is the algorithmic framework and research platform, while TinyEngine is its optimized inference runtime, which implements and executes those memory-efficient neural networks on actual microcontrollers. The two are complementary and are used together.
About mcunet
mit-han-lab/mcunet
[NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning
This project helps embedded systems engineers and IoT device developers deploy complex machine learning models, like those for image classification or person detection, onto tiny, low-power microcontrollers. It takes your trained deep learning models and optimizes them to run efficiently, delivering faster inference and significantly reduced memory usage on resource-constrained IoT devices. This allows smart features to run directly on devices without needing to connect to the cloud.
About tinyengine
mit-han-lab/tinyengine
[NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning; [NeurIPS 2022] MCUNetV3: On-Device Training Under 256KB Memory
This tool helps embed sophisticated machine learning capabilities directly onto tiny, low-power microcontroller devices, enabling them to process data and make decisions independently. It takes a trained neural network model and optimizes it to run efficiently on hardware with very limited memory. The primary users are engineers and product developers creating smart IoT devices like wearable sensors, smart appliances, or industrial monitors.
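The "very limited memory" constraint is what motivates the patch-based inference in the MCUNetV2 paper cited above: instead of materializing a whole feature map in SRAM, the first stages of the network are executed patch by patch, so only a small spatial tile (plus a halo of border pixels for the convolution's receptive field) is resident at once. The following toy calculation sketches why this cuts peak memory; it is an illustrative model only, not TinyEngine's actual scheduler, and the dimensions and halo size are assumptions chosen for the example.

```python
# Toy illustration (not the actual MCUNetV2/TinyEngine scheduler):
# peak activation memory of an early conv stage, whole-image vs patch-based.

def activation_bytes(h, w, c, bytes_per_elem=1):
    """SRAM footprint of one int8 feature map of shape h x w x c."""
    return h * w * c * bytes_per_elem

def peak_memory_per_layer(h, w, c_in, c_out):
    # Conventional per-layer inference keeps the full input and output
    # feature maps resident at the same time.
    return activation_bytes(h, w, c_in) + activation_bytes(h, w, c_out)

def peak_memory_patched(h, w, c_in, c_out, n_patches_per_side, halo=2):
    # Patch-based inference holds only one spatial tile at a time, padded
    # by a small halo of border pixels for the conv receptive field.
    ph = h // n_patches_per_side + halo
    pw = w // n_patches_per_side + halo
    return activation_bytes(ph, pw, c_in) + activation_bytes(ph, pw, c_out)

# Hypothetical early layer: 144x144 RGB input, 16 output channels, 4x4 patches.
full = peak_memory_per_layer(144, 144, 3, 16)
patched = peak_memory_patched(144, 144, 3, 16, n_patches_per_side=4)
print(f"per-layer peak: {full} B, patch-based peak: {patched} B")
```

With these example numbers the whole-image schedule needs roughly 385 KB of activations while the patch-based schedule needs under 30 KB, which is the kind of reduction that makes a model fit in a microcontroller's SRAM budget; the real scheduler also accounts for overlapping computation between patches and switches back to per-layer execution once the feature maps are small.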