DefangChen/SimKD

[CVPR-2022] Official implementation of "Knowledge Distillation with the Reused Teacher Classifier".

Score: 35 / 100 (Emerging)

When building image classification models, this toolbox helps machine learning engineers improve the performance of smaller, more efficient 'student' models by transferring knowledge from larger, more accurate 'teacher' models. It takes a pre-trained teacher model and training data (like CIFAR-100 or ImageNet) as input, and outputs an optimized student model that performs better on classification tasks than if it were trained alone. This is ideal for those working on deploying efficient computer vision models.
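As a rough illustration of the idea in the paper's title (aligning the student's features to the teacher's feature space, then reusing the teacher's classifier), here is a minimal PyTorch sketch. The module names, feature dimensions, and the plain MSE feature loss are illustrative assumptions, not the repository's actual code.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch of a "reused teacher classifier" setup:
# project student features into the teacher's feature space and
# classify them with the teacher's frozen final layer.
class ReusedClassifierStudent(nn.Module):
    def __init__(self, student_backbone, teacher_classifier,
                 student_dim=256, teacher_dim=512):
        super().__init__()
        self.backbone = student_backbone                      # yields (N, student_dim) features
        self.projector = nn.Linear(student_dim, teacher_dim)  # assumed projector shape
        self.classifier = teacher_classifier                  # reused from the teacher, frozen
        for p in self.classifier.parameters():
            p.requires_grad = False

    def forward(self, x):
        feat = self.projector(self.backbone(x))  # map into teacher feature space
        return feat, self.classifier(feat)

def distillation_step(student, teacher_backbone, images):
    """One training step: match projected student features to teacher features."""
    with torch.no_grad():
        teacher_feat = teacher_backbone(images)   # target features, no gradient
    student_feat, logits = student(images)
    loss = F.mse_loss(student_feat, teacher_feat) # feature-alignment loss
    return loss, logits

At inference the student keeps the teacher's classifier, so only the backbone and projector were actually trained.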

102 stars. No commits in the last 6 months.

Use this if you need to create compact and efficient image classification models that maintain high accuracy, leveraging state-of-the-art knowledge distillation techniques.

Not ideal if you are working with non-image data types or need to develop models from scratch without relying on a teacher model for knowledge transfer.

Tags: image-classification, model-optimization, deep-learning, computer-vision, model-deployment
Flags: No License · Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 9 / 25
Maturity 8 / 25
Community 18 / 25


Stars: 102
Forks: 19
Language: Python
License: None
Last pushed: Jun 16, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/DefangChen/SimKD"

Open to everyone: 100 requests/day with no key needed. Get a free API key for 1,000 requests/day.
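For scripted access, a minimal Python sketch against the endpoint above; the shape of the JSON response is not documented here, so the example prints the raw payload rather than assuming field names.

import requests

# Query the quality API for this repository (keyless tier: 100 requests/day).
url = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/DefangChen/SimKD"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
data = resp.json()
print(data)  # field names depend on the API's undocumented schema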