LLM Knowledge Distillation Tools

Tools and frameworks for compressing large language models into smaller, efficient student models through knowledge distillation techniques. Includes distillation algorithms, teacher-student training pipelines, and methods for knowledge transfer. Does NOT include general model pruning, quantization, or fine-tuning without a teacher model.
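For orientation, the mechanism most of these tools build on is the classic soft-label distillation loss: the student is trained to match the teacher's temperature-softened output distribution in addition to the hard labels. Below is a minimal PyTorch sketch following Hinton et al.'s formulation; the function name and default hyperparameters are illustrative and not taken from any tool listed here.

```python
# Minimal sketch of the classic soft-label distillation loss
# (Hinton et al.). Illustrative only; not the API of any listed tool.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend the soft-target KL term (teacher) with hard-label CE."""
    # Soften both distributions with the temperature.
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # KL divergence between student and teacher distributions.
    kd = F.kl_div(soft_student, soft_teacher, reduction="batchmean")
    kd = kd * (temperature ** 2)  # rescale gradients (Hinton et al.)
    # Standard cross-entropy against the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce
```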

There are 10 LLM knowledge distillation tools tracked. The highest-rated is LLM-Tuning-Safety/LLMs-Finetuning-Safety at 42/100 with 344 stars.

Get all 10 projects as JSON:

```bash
curl "https://pt-edge.onrender.com/api/v1/datasets/quality?domain=llm-tools&subcategory=llm-knowledge-distillation&limit=20"
```

The API is open to everyone: 100 requests/day with no key, or 1,000/day with a free key.
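To consume the endpoint programmatically, here is a minimal Python sketch using only the standard library. It assumes the endpoint returns a JSON body; the response schema is not documented here, so the sketch prints the raw payload for inspection rather than guessing field names.

```python
# Minimal sketch: fetch the dataset and inspect the raw JSON payload.
# Assumption: the endpoint returns JSON; its schema is not documented
# here, so we print it rather than assume field names.
import json
import urllib.request

URL = (
    "https://pt-edge.onrender.com/api/v1/datasets/quality"
    "?domain=llm-tools&subcategory=llm-knowledge-distillation&limit=20"
)

with urllib.request.urlopen(URL) as resp:
    payload = json.load(resp)

print(json.dumps(payload, indent=2, ensure_ascii=False))
```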

| # | Tool | Description | Score | Tier |
|---|------|-------------|-------|------|
| 1 | LLM-Tuning-Safety/LLMs-Finetuning-Safety | We jailbreak GPT-3.5 Turbo’s safety guardrails by fine-tuning it on only 10... | 42 | Emerging |
| 2 | kyegomez/Sophia | Effortless plug-and-play optimizer to cut model training costs by 50%. ... | 39 | Emerging |
| 3 | appier-research/robust-llm-finetunes | Accepted to NeurIPS 2025 | 35 | Emerging |
| 4 | uthmandevsec/Self-Distillation | 🤖 Enable continual learning by reproducing the On-Policy Self-Distillation... | 31 | Emerging |
| 5 | jmcentire/apprentice | Train cheap models on expensive ones. Automatically. With receipts. | 27 | Experimental |
| 6 | phonism/LLMNotes | LLM study notes: Transformer architecture, reinforcement learning (RLHF/DPO/PPO), distributed training, and inference optimization. Includes complete mathematical derivations and slides. | 27 | Experimental |
| 7 | kyj93790/VILA | [COLM 2025] Improving Fisher Information Estimation and Efficiency for... | 22 | Experimental |
| 8 | Hong-Lab-UMN-ECE/RoSTE | [ICML 2025] Official code for the paper "RoSTE: An Efficient... | 22 | Experimental |
| 9 | 2proveit/LLMCL-DeepSpeed | Implementation of some classical continual learning (CL) methods using DeepSpeed | 12 | Experimental |
| 10 | FareedKhan-dev/Improve-Weak-LLM-Using-SPIN-Technique | After RLHF and SFT show promising results, a new technique named SPIN is... | 12 | Experimental |