tianyic/only_train_once_personal_footprint

OTOv1-v3, NeurIPS, ICLR, TMLR, DNN Training, Compression, Structured Pruning, Erasing Operators, CNN, Diffusion, LLM

Quality score: 46 / 100 (Emerging)

This project helps machine learning engineers and researchers optimize deep neural networks (DNNs) for deployment. Given an existing DNN, untrained or pre-trained, it automatically produces a smaller, more efficient version with little to no loss in accuracy, reducing the compute and memory a model needs in production.
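For context, the workflow in the upstream only_train_once library (which this repository appears to track) usually has three steps: wrap the model, train with a sparsity-inducing optimizer, then erase the zeroed structures. The sketch below assumes the upstream API; the names OTO, hesso, and construct_subnet and their parameters are taken from that project's documentation and may differ in this fork.

import torch
import torchvision
from only_train_once import OTO

# Wrap the model; OTO traces the graph to find prunable structure groups.
model = torchvision.models.resnet18(weights=None)
oto = OTO(model=model, dummy_input=torch.randn(1, 3, 224, 224))

# Sparsity-inducing optimizer (HESSO in OTOv3, assumed name); train as usual.
optimizer = oto.hesso(variant='sgd', lr=0.1, target_group_sparsity=0.7)

# ... standard training loop driving `optimizer` ...

# Erase the zeroed structures and export the compact sub-network.
oto.construct_subnet(out_dir='./compressed_model')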

310 stars. No commits in the last 6 months.

Use this if you need to make your deep learning models smaller and faster for deployment while maintaining their accuracy, without manually redesigning or fine-tuning them.

Not ideal if you are just starting out with deep learning, as this tool optimizes existing models rather than helping with initial model development.

deep-learning-optimization model-compression machine-learning-deployment neural-network-efficiency AI-resource-management
Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 20 / 25


Stars: 310
Forks: 48
Language: Python
License: MIT
Last pushed: Sep 16, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/tianyic/only_train_once_personal_footprint"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
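A minimal Python sketch for calling the endpoint and inspecting what it returns; the payload schema is not documented in this listing, so the code only dumps the raw JSON rather than assuming any field names.

import json
import urllib.request

# Hypothetical client for the quality endpoint shown above; prints the raw
# JSON because the response schema is not documented here.
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/tianyic/only_train_once_personal_footprint")

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))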