DefangChen/Knowledge-Distillation-Paper

This repository maintains a collection of important papers on knowledge distillation (awesome-knowledge-distillation).

Overall score: 35 / 100 (Emerging)

This collection provides a curated list of research papers on knowledge distillation, a technique for compressing large, complex deep learning models into smaller, more efficient ones without significant loss of performance. The list highlights pioneering works, survey articles, and specific applications such as accelerating diffusion models or improving segmentation. Machine learning researchers, practitioners, and students focused on model compression and efficiency will find it useful.

No commits in the last 6 months.

Use this if you need to quickly find relevant academic papers to understand, apply, or research knowledge distillation techniques in deep learning.

Not ideal if you are looking for code implementations, tutorials, or a high-level conceptual overview without diving into academic literature.

deep-learning model-compression machine-learning-research neural-networks artificial-intelligence
No License | Stale (6 months) | No Package | No Dependents

Maintenance: 0 / 25
Adoption: 9 / 25
Maturity: 8 / 25
Community: 18 / 25


Stars: 84
Forks: 16
Language: -
License: none
Last pushed: Mar 19, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/DefangChen/Knowledge-Distillation-Paper"

Open to everyone: 100 requests/day, no key needed. Get a free API key for 1,000 requests/day.
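
As a sketch, the same endpoint can be queried from Python. The URL is taken from the curl example above; the only assumption is that the endpoint returns JSON (no particular field names are assumed, and the response schema is not documented here):

import requests

# Endpoint from the curl example above.
url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "diffusion/DefangChen/Knowledge-Distillation-Paper")

# Assumption: the endpoint returns JSON and needs no key at the
# free tier (100 requests/day per the note above).
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # surface HTTP errors, e.g. rate limiting
data = resp.json()

# Print whichever fields the API actually returns.
for key, value in data.items():
    print(f"{key}: {value}")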