IPL-sharif/KD_Survey

A Comprehensive Survey on Knowledge Distillation

Score: 30 / 100 (Emerging)

This resource provides a comprehensive overview of Knowledge Distillation (KD), a technique for compressing large, complex AI models (such as LLMs or vision-language models) into smaller ones that run efficiently on hardware with limited computing power, such as edge devices. It surveys a broad range of KD methods, categorizing them by knowledge source, distillation scheme, algorithm, and application across data modalities (text, speech, 3D input, and others) to give a structured view of the field. Data scientists, machine learning engineers, and AI researchers working with large neural networks can use it to understand and apply KD effectively.
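For orientation, here is a minimal sketch of the classic response-based (soft-label) distillation loss from Hinton et al., which surveys like this one treat as the baseline KD algorithm. This is illustrative PyTorch, not code from the repository; the temperature T and mixing weight alpha are assumed defaults, not values the survey prescribes.

import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Soften both output distributions with temperature T, then match the
    # student to the teacher via KL divergence; the T*T factor keeps the
    # gradient scale comparable to the hard-label term.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Standard cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    # Blend the two terms; with alpha=0 this reduces to ordinary supervised training.
    return alpha * soft + (1 - alpha) * hard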

Use this if you need to deploy large, high-performing AI models onto resource-constrained devices and are looking for techniques to reduce their runtime and memory footprint without significant performance loss.

Not ideal if you are new to deep learning or neural networks, as it assumes familiarity with advanced AI concepts and model optimization strategies.

Tags: AI deployment, model optimization, deep learning, edge computing, large language models
No License · No Package · No Dependents
Maintenance: 6 / 25
Adoption: 8 / 25
Maturity: 8 / 25
Community: 8 / 25

How are scores calculated? The four subscores (each out of 25) add up to the overall score: 6 + 8 + 8 + 8 = 30 / 100.

Stars: 63
Forks: 4
Language: (not listed)
License: None
Last pushed: Dec 27, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/IPL-sharif/KD_Survey"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
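The same request as a minimal Python sketch; the response schema isn't documented on this page, so it simply pretty-prints whatever JSON the endpoint returns.

import json
import urllib.request

url = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/IPL-sharif/KD_Survey"
with urllib.request.urlopen(url) as resp:  # no API key needed at the free tier
    data = json.load(resp)
print(json.dumps(data, indent=2))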