alibaba/TinyNeuralNetwork
TinyNeuralNetwork is an efficient and easy-to-use deep learning model compression framework.
This framework helps AI developers make deep learning models smaller and faster on resource-constrained devices such as smart speakers, TVs, and facial recognition systems. It takes an existing PyTorch model and produces a compressed version that needs less memory and compute, suitable for deployment across millions of IoT devices. Its typical users are AI engineers and machine learning practitioners targeting edge deployment.
Use this if you need to deploy large deep learning models on IoT devices or embedded systems where computational resources and memory are limited.
Not ideal if you are solely focused on cloud-based AI applications or do not require model size and speed optimizations for edge deployment.
Stars
873
Forks
131
Language
Python
License
MIT
Category
ml-frameworks
Last pushed
Mar 03, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/alibaba/TinyNeuralNetwork"
Open to everyone: 100 requests/day with no key needed. Get a free API key for 1,000 requests/day.
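For scripted access, the curl command above can be wrapped in a few lines of Python. This is a minimal sketch using only the standard library; the base URL and path segments are taken from the curl example, while the helper names and the assumption that the endpoint returns JSON are illustrative, not documented behavior.

```python
import json
import urllib.request

# Base URL taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, repo: str) -> str:
    """Build the API URL for a repo, e.g. 'alibaba/TinyNeuralNetwork'."""
    return f"{BASE}/{category}/{repo}"

def fetch_quality(category: str, repo: str) -> dict:
    """Fetch the endpoint and decode its payload.

    Assumes the response body is JSON; the response schema is not
    documented here, so inspect the returned dict before relying on keys.
    """
    with urllib.request.urlopen(quality_url(category, repo)) as resp:
        return json.load(resp)

print(quality_url("ml-frameworks", "alibaba/TinyNeuralNetwork"))
```

Without an API key this counts against the 100 requests/day anonymous quota, so cache responses rather than polling.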
Related frameworks
fastmachinelearning/hls4ml
Machine learning on FPGAs using HLS
KULeuven-MICAS/zigzag
HW Architecture-Mapping Design Space Exploration Framework for Deep Learning Accelerators
fastmachinelearning/hls4ml-tutorial
Tutorial notebooks for hls4ml
doonny/PipeCNN
An OpenCL-based FPGA Accelerator for Convolutional Neural Networks
es-ude/elastic-ai.creator
Design, train and generate neural networks optimized specifically for FPGAs.