SforAiDl/KD_Lib

A PyTorch Knowledge Distillation library for benchmarking and extending works in the domains of Knowledge Distillation, Pruning, and Quantization.

Score: 52 / 100 (Established)

This tool helps machine learning engineers and researchers make their trained neural networks smaller and faster, without significantly losing accuracy. You provide a large, high-performing 'teacher' model and a smaller 'student' model, along with your training data. The output is a more compact, efficient 'student' model that mimics the teacher's performance.
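As a rough illustration of that workflow, here is a minimal sketch based on the VanillaKD example in the repository's README; the MNIST loaders, the toy teacher/student models, and the hyperparameters are placeholders, not a recommended configuration.

```python
import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, transforms
from KD_Lib.KD import VanillaKD

# Illustrative MNIST loaders; any classification dataset works the same way.
transform = transforms.Compose(
    [transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]
)
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST("mnist_data", train=True, download=True, transform=transform),
    batch_size=32, shuffle=True,
)
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST("mnist_data", train=False, download=True, transform=transform),
    batch_size=32,
)

# Hypothetical teacher/student pair: the teacher is large, the student compact.
teacher_model = nn.Sequential(
    nn.Flatten(), nn.Linear(784, 1200), nn.ReLU(),
    nn.Linear(1200, 1200), nn.ReLU(), nn.Linear(1200, 10),
)
student_model = nn.Sequential(
    nn.Flatten(), nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10),
)

teacher_optimizer = optim.SGD(teacher_model.parameters(), lr=0.01)
student_optimizer = optim.SGD(student_model.parameters(), lr=0.01)

# VanillaKD implements Hinton-style distillation: the student learns from a mix
# of the true labels and the teacher's softened output distribution.
distiller = VanillaKD(teacher_model, student_model, train_loader, test_loader,
                      teacher_optimizer, student_optimizer)
distiller.train_teacher(epochs=5)   # train the teacher first
distiller.train_student(epochs=5)   # distill into the student
distiller.evaluate(teacher=False)   # accuracy of the compact student
distiller.get_parameters()          # compare teacher vs. student parameter counts
```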

652 stars. No commits in the last 6 months. Available on PyPI.

Use this if you need to deploy complex deep learning models to environments with limited computational resources, such as mobile devices or edge hardware.

Not ideal if you are a business user without a working understanding of machine learning model architectures and training processes.

model-optimization deep-learning-deployment edge-ai neural-network-compression machine-learning-research
Stale for 6 months
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 25 / 25
Community: 17 / 25


Stars: 652
Forks: 61
Language: Python
License: MIT
Last pushed: Mar 01, 2023
Commits (30d): 0
Dependencies: 26

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/SforAiDl/KD_Lib"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
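If you would rather consume the endpoint from Python, a minimal stdlib sketch is below; the JSON schema of the response is not documented here, so it simply pretty-prints whatever the API returns.

```python
import json
from urllib.request import urlopen

# Same endpoint as the curl command above.
url = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/SforAiDl/KD_Lib"

with urlopen(url) as resp:
    data = json.load(resp)

# Field names are unknown, so dump the full payload for inspection.
print(json.dumps(data, indent=2))
```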