DefangChen/SemCKD

[AAAI-2021, TKDE-2023] Official implementation of "Cross-Layer Distillation with Semantic Calibration".

Score: 35 / 100 (Emerging)

This project helps machine learning engineers create smaller, more efficient 'student' neural networks that perform almost as well as larger, more complex 'teacher' networks. It takes a large, pre-trained teacher model and training data (like image datasets) as input, and outputs a compact student model that is easier to deploy. Machine learning researchers and practitioners who need to optimize model size and inference speed for deployment would find this valuable.

No commits in the last 6 months.

Use this if you need to reduce the computational cost and memory footprint of a neural network model without significantly sacrificing its performance, especially for tasks like image classification.

Not ideal if your primary goal is to train a model from scratch without leveraging an existing larger model, or if you prioritize interpretability of the distillation process over raw performance gains.
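As background on the technique, here is a minimal sketch of vanilla knowledge distillation (Hinton et al., 2015) written for PyTorch, which is an assumption about the setup since the page lists the repository's language only as Jupyter Notebook. It shows just the generic teacher-student loss; SemCKD's actual contribution, cross-layer distillation with semantic calibration of intermediate features, is not reproduced here. The function name kd_loss and the hyperparameters T and alpha are illustrative, not the repository's API.

import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Blend a softened teacher-matching term with ordinary cross-entropy."""
    # Soft-target term: KL divergence between temperature-softened distributions,
    # rescaled by T^2 so gradient magnitudes stay comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-target term: standard cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Typical use inside a training step (teacher frozen, student trainable):
#     with torch.no_grad():
#         teacher_logits = teacher(images)
#     loss = kd_loss(student(images), teacher_logits, labels)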

model-optimization deep-learning-deployment computer-vision neural-network-compression model-efficiency
No License · Stale 6m · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 9 / 25
Maturity: 8 / 25
Community: 18 / 25

Stars: 78
Forks: 15
Language: Jupyter Notebook
License: None
Last pushed: Jul 29, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/DefangChen/SemCKD"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
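If you prefer Python over curl, the same endpoint can be queried with the third-party requests library, as in the sketch below. Only the URL comes from this page; the structure of the returned JSON is not documented here, so the code simply prints whatever comes back.

import requests  # third-party: pip install requests

URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/DefangChen/SemCKD"

# Fetch the quality report; raise_for_status surfaces HTTP errors,
# e.g. if the 100-requests/day anonymous rate limit is exceeded.
resp = requests.get(URL, timeout=10)
resp.raise_for_status()
print(resp.json())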