Tebmer/Awesome-Knowledge-Distillation-of-LLMs

This repository collects papers for the survey "A Survey on Knowledge Distillation of Large Language Models". It breaks KD down into Knowledge Elicitation and Distillation Algorithms, and explores Skill Distillation and Vertical Distillation of LLMs.

Quality score: 34 / 100 (Emerging)

This collection of research papers helps AI practitioners who want to make smaller language models smarter or more efficient. It offers guidance on how to transfer advanced capabilities from large, proprietary models (like GPT-4) to smaller, open-source models (like LLaMA) or how to make open-source models improve themselves. This resource is for AI researchers, machine learning engineers, and data scientists working with language models.
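
At its core, most of the surveyed approaches train a student model to match a teacher model's output distribution. For orientation only (this is not code from the survey or the repository), here is a minimal PyTorch sketch of the classic soft-label distillation loss of Hinton et al. (2015); white-box losses like this require access to teacher logits, which is why distilling API-only teachers such as GPT-4 instead relies on teacher-generated training data:

import torch.nn.functional as F

def soft_label_distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL divergence between temperature-softened teacher and student
    # next-token distributions, averaged over the batch.
    t = temperature
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)  # student: log-probs
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)          # teacher: probs
    # The t**2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * (t * t)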

1,264 stars. No commits in the last 6 months.

Use this if you need to compress large language models, improve the performance of smaller open-source models, or imbue specific skills into a model for specialized tasks.

Not ideal if you are looking for ready-to-use software or an implementation guide rather than a research compendium on techniques.

Tags: AI research · machine learning engineering · natural language processing · model optimization · AI model specialization
Flags: No License · Stale (6m) · No Package · No Dependents

Score breakdown (sums to the 34/100 overall):
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 8 / 25
Community: 16 / 25

Stars: 1,264
Forks: 71
Language: not listed
License: none
Last pushed: Mar 09, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Tebmer/Awesome-Knowledge-Distillation-of-LLMs"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
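
For programmatic use, a minimal sketch of calling the same endpoint from Python using only the standard library; the response schema is not documented on this page, so the example prints the raw JSON rather than assuming any field names:

import json
import urllib.request

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/Tebmer/Awesome-Knowledge-Distillation-of-LLMs")

with urllib.request.urlopen(URL) as resp:  # anonymous access: 100 requests/day
    data = json.load(resp)

print(json.dumps(data, indent=2))  # inspect the payload before relying on specific keys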