ZuchniakK/MTKD

Multi-Teacher Knowledge Distillation: code for my PhD dissertation. I used knowledge distillation as a decision-fusion and compression mechanism for ensemble models.

Score: 33 / 100 (Emerging)

This project helps machine learning engineers and researchers deploy powerful deep learning models into real-time applications, especially on devices with limited resources. It takes multiple trained deep learning models (teachers) and compresses their collective knowledge into a single, smaller model (student) that maintains high accuracy but is far more efficient. This allows complex AI solutions for tasks like automated corrosion detection or wildfire smoke detection to run effectively on edge devices.
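The core idea can be illustrated with a short sketch: the student network is trained against a fused soft-label distribution produced by averaging the teachers' softened predictions, combined with the usual hard-label loss. This is a minimal PyTorch-style illustration under assumed defaults (temperature, loss weighting, simple averaging as the fusion rule); the dissertation code's actual framework, fusion strategy, and hyperparameters may differ.

# Minimal multi-teacher distillation loss sketch (PyTorch).
# Illustrative only: fusion rule, temperature, and alpha are assumptions,
# not the repository's documented settings.
import torch
import torch.nn.functional as F

def multi_teacher_distillation_loss(student_logits, teacher_logits_list,
                                    labels, temperature=4.0, alpha=0.5):
    # Hard-label term: standard cross-entropy with ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)

    # Decision fusion: average the teachers' softened probability outputs.
    teacher_probs = torch.stack(
        [F.softmax(t / temperature, dim=-1) for t in teacher_logits_list]
    ).mean(dim=0)

    # Soft-label term: KL divergence between the student's softened
    # predictions and the fused teacher distribution.
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    soft_loss = F.kl_div(student_log_probs, teacher_probs,
                         reduction="batchmean") * (temperature ** 2)

    return alpha * hard_loss + (1.0 - alpha) * soft_loss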

No commits in the last 6 months.

Use this if you need to deploy accurate deep learning models on devices with limited computational power or storage, without significantly sacrificing performance.

Not ideal if your existing single model is already lightweight enough for your deployment environment or if you have ample computational resources for large model ensembles.

edge-ai model-compression real-time-inference computer-vision resource-constrained-devices
Stale (6 months) · No package · No dependents
Maintenance 0 / 25
Adoption 7 / 25
Maturity 16 / 25
Community 10 / 25


Stars: 27
Forks: 3
Language: Jupyter Notebook
License: MIT
Last pushed: May 19, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/ZuchniakK/MTKD"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
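For scripted access, the same endpoint can be queried from Python. This is a minimal sketch that assumes the endpoint returns JSON; the exact response fields are not documented here.

# Hypothetical usage sketch: fetch the quality data shown above in Python.
# Assumes a JSON response body; field names are not specified in this listing.
import requests

resp = requests.get(
    "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/ZuchniakK/MTKD"
)
resp.raise_for_status()
print(resp.json())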