megvii-research/mdistiller

The official implementation of [CVPR2022] Decoupled Knowledge Distillation https://arxiv.org/abs/2203.08679 and [ICCV2023] DOT: A Distillation-Oriented Trainer https://openaccess.thecvf.com/content/ICCV2023/papers/Zhao_DOT_A_Distillation-Oriented_Trainer_ICCV_2023_paper.pdf
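For context on what is being decoupled: DKD splits the classical knowledge-distillation loss into a target-class term (TCKD) and a non-target-class term (NCKD) and weights the two independently. Below is a minimal PyTorch sketch of that formulation based on the paper; the function name, masking trick, and default hyperparameters are illustrative assumptions, not the repository's actual code.

import torch
import torch.nn.functional as F

def dkd_loss(student_logits, teacher_logits, target, alpha=1.0, beta=8.0, T=4.0):
    # Decoupled KD: weight the target-class and non-target-class terms separately.
    gt_mask = F.one_hot(target, num_classes=student_logits.size(1)).bool()

    # Target-class KD (TCKD): KL between binary distributions [p_target, 1 - p_target].
    p_s = F.softmax(student_logits / T, dim=1)
    p_t = F.softmax(teacher_logits / T, dim=1)
    b_s = torch.stack([(p_s * gt_mask).sum(1), (p_s * ~gt_mask).sum(1)], dim=1)
    b_t = torch.stack([(p_t * gt_mask).sum(1), (p_t * ~gt_mask).sum(1)], dim=1)
    tckd = F.kl_div(b_s.log(), b_t, reduction="batchmean") * (T ** 2)

    # Non-target-class KD (NCKD): KL over the remaining classes, with the
    # ground-truth logit pushed far down so it effectively drops out of the softmax.
    nt_s = F.log_softmax(student_logits / T - 1000.0 * gt_mask, dim=1)
    nt_t = F.softmax(teacher_logits / T - 1000.0 * gt_mask, dim=1)
    nckd = F.kl_div(nt_s, nt_t, reduction="batchmean") * (T ** 2)

    return alpha * tckd + beta * nckd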

Quality score: 40 / 100 (Emerging)

This is a tool for machine learning practitioners to make their large, accurate computer vision models (teachers) teach smaller, faster models (students) to perform nearly as well. You provide your existing teacher model and a student model architecture, and it trains a compact student that approaches the teacher's accuracy. It's designed for machine learning engineers and researchers deploying efficient computer vision solutions.
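As a rough illustration of that teacher-student workflow, here is one training step of classical soft-label distillation in PyTorch. This is a sketch only, not mdistiller's actual trainer API; the function and argument names are hypothetical.

import torch
import torch.nn.functional as F

def distill_step(student, teacher, images, labels, optimizer, T=4.0, kd_weight=1.0):
    # One step: the student fits the hard labels plus the teacher's
    # temperature-softened predictions (classical knowledge distillation).
    teacher.eval()
    with torch.no_grad():
        teacher_logits = teacher(images)   # teacher is frozen; only the student learns
    student_logits = student(images)

    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T ** 2)

    loss = ce + kd_weight * kd
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()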

894 stars. No commits in the last 6 months.

Use this if you need to deploy high-performing computer vision models on resource-constrained devices or in latency-sensitive applications.

Not ideal if you are looking for an off-the-shelf, no-code solution for general image classification or object detection, as it requires expertise in model training and architecture.

model-optimization computer-vision deep-learning-deployment image-classification object-detection
No license · Stale (6 months) · No package · No dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 8 / 25
Community: 22 / 25


Stars: 894
Forks: 132
Language: Python
License: None
Last pushed: Nov 05, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/megvii-research/mdistiller"

The API is open to everyone at 100 requests/day with no key needed; a free key raises the limit to 1,000 requests/day.