tianyic/only_train_once_personal_footprint
OTOv1-v3, NeurIPS, ICLR, TMLR, DNN Training, Compression, Structured Pruning, Erasing Operators, CNN, Diffusion, LLM
This project helps machine learning engineers and researchers optimize deep neural networks (DNNs) for deployment. It takes an existing DNN model, either untrained or pre-trained, and automatically produces a smaller, more efficient version without sacrificing performance. This is ideal for reducing the computational resources and memory needed for models in production.
310 stars. No commits in the last 6 months.
Use this if you need to make your deep learning models smaller and faster for deployment while maintaining their accuracy, without manually redesigning or fine-tuning them.
Not ideal if you are a beginner just starting with deep learning model training, as this tool is for optimizing existing models, not for initial model development.
Stars: 310
Forks: 48
Language: Python
License: MIT
Category: ml-frameworks
Last pushed: Sep 16, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/tianyic/only_train_once_personal_footprint"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.
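The same request can be issued from Python using only the standard library. A minimal sketch: the endpoint URL is taken from the curl command above, but the response is assumed to be JSON and its schema is not documented here, so inspect the raw payload before relying on specific fields.

```python
import json
import urllib.request

# Endpoint from the listing above; unauthenticated access allows 100 requests/day.
API_URL = ("https://pt-edge.onrender.com/api/v1/quality/"
           "ml-frameworks/tianyic/only_train_once_personal_footprint")

def fetch_quality_record(url: str = API_URL) -> dict:
    """Fetch the repo's quality record and parse it as JSON.

    Assumes the endpoint returns a JSON object; the exact schema
    is not documented in this listing.
    """
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)
```

The listing does not specify how an API key is supplied (header vs. query parameter), so no authentication is sketched here.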
Higher-rated alternatives
open-mmlab/mmengine
OpenMMLab Foundational Library for Training Deep Learning Models
Xilinx/brevitas
Brevitas: neural network quantization in PyTorch
fastmachinelearning/qonnx
QONNX: Arbitrary-Precision Quantized Neural Networks in ONNX
google/qkeras
QKeras: a quantization deep learning library for Tensorflow Keras
tensorflow/model-optimization
A toolkit to optimize ML models for deployment for Keras and TensorFlow, including quantization...