Tebmer/Awesome-Knowledge-Distillation-of-LLMs
This repository collects papers for "A Survey on Knowledge Distillation of Large Language Models". We break down KD into Knowledge Elicitation and Distillation Algorithms, and explore the Skill & Vertical Distillation of LLMs.
This collection of research papers helps AI practitioners who want to make smaller language models smarter or more efficient. It offers guidance on how to transfer advanced capabilities from large, proprietary models (like GPT-4) to smaller, open-source models (like LLaMA) or how to make open-source models improve themselves. This resource is for AI researchers, machine learning engineers, and data scientists working with language models.
1,264 stars. No commits in the last 6 months.
Use this if you need to compress large language models, improve the performance of smaller open-source models, or imbue specific skills into a model for specialized tasks.
Not ideal if you are looking for ready-to-use software or an implementation guide rather than a research compendium on techniques.
Stars: 1,264
Forks: 71
Language: —
License: —
Category: —
Last pushed: Mar 09, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Tebmer/Awesome-Knowledge-Distillation-of-LLMs"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000 requests/day.
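The same endpoint can be called from Python using only the standard library. A minimal sketch, assuming the path shape shown in the curl command above (`/api/v1/quality/<ecosystem>/<owner>/<repo>`); the response schema is not documented on this page, so the example simply returns the parsed JSON as-is:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    # Build the per-repository quality endpoint URL, matching the
    # curl example above.
    return f"{BASE}/{ecosystem}/{owner}/{repo}"

def fetch_quality(ecosystem: str, owner: str, repo: str) -> dict:
    # Anonymous access is limited to 100 requests/day; pass an API
    # key (if you have one) however the service expects it.
    with urllib.request.urlopen(quality_url(ecosystem, owner, repo)) as resp:
        return json.load(resp)
```

For example, `fetch_quality("transformers", "Tebmer", "Awesome-Knowledge-Distillation-of-LLMs")` requests the exact URL from the curl command above.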
Higher-rated alternatives
scaleapi/llm-engine
Scale LLM Engine public repository
AGI-Arena/MARS
The official implementation of MARS: Unleashing the Power of Variance Reduction for Training Large Models
modelscope/easydistill
a toolkit on knowledge distillation for large language models
AGI-Edgerunners/LLM-Adapters
Code for our EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient...
Wang-ML-Lab/bayesian-peft
Bayesian Low-Rank Adaptation of LLMs: BLoB [NeurIPS 2024] and TFB [NeurIPS 2025]