dkozlov/awesome-knowledge-distillation

Awesome Knowledge Distillation

Overall score: 57 / 100 (Established)

This is a curated collection of research papers on knowledge distillation, a machine-learning technique for transferring what a large "teacher" model has learned into a smaller "student" model. The papers cover how to compress high-performing but resource-intensive models into smaller, faster ones while retaining most of their accuracy. It is aimed at machine learning engineers, data scientists, and researchers optimizing models for deployment in resource-constrained environments.
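For orientation, the canonical starting point for much of this literature is the soft-target loss of Hinton et al. (2015). Below is a minimal PyTorch-style sketch of that loss; the temperature and mixing weight are illustrative defaults, not values prescribed by any paper in the list.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Soft-target distillation (Hinton et al., 2015), sketched.

    Blends the usual cross-entropy on hard labels with a KL term
    that pushes the student toward the teacher's softened outputs.
    """
    # Softened distributions; the T^2 factor rescales the KL term's
    # gradients so they stay comparable to the cross-entropy term.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd = F.kl_div(soft_student, soft_teacher, reduction="batchmean")
    kd = kd * (temperature ** 2)

    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce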

3,825 stars. Maintained, with 1 commit in the last 30 days.

Use this if you need to research methods for compressing large, accurate machine learning models into smaller, more efficient versions for faster inference or deployment on edge devices.

Not ideal if you are looking for ready-to-use code, tutorials, or a direct implementation of knowledge distillation algorithms.

model-optimization machine-learning-deployment deep-learning-efficiency AI-model-compression resource-constrained-AI
No package · No dependents
Maintenance 9 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 22 / 25

How are scores calculated? The four 25-point subscores above sum to the overall score: 9 + 10 + 16 + 22 = 57 / 100.

Stars: 3,825
Forks: 513
Language: —
License: Apache-2.0
Last pushed: Dec 25, 2025
Commits (30d): 1

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/dkozlov/awesome-knowledge-distillation"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
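If you prefer to call the endpoint from code, here is a minimal Python sketch. It assumes only that the endpoint returns JSON; the payload's field names are not documented on this page, so the sketch simply prints whatever comes back.

import json
import urllib.request

# Same endpoint as the curl example above.
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/dkozlov/awesome-knowledge-distillation")

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

# Pretty-print the response; inspect it to learn the schema.
print(json.dumps(data, indent=2))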