HikariTJU/LD
Localization Distillation for Object Detection (CVPR 2022, TPAMI 2023)
Localization Distillation for Object Detection (LD) helps computer vision engineers and researchers improve object detection accuracy without adding model complexity. It transfers detailed localization knowledge from an existing, powerful "teacher" detector to a smaller "student" model, yielding a student that is smaller and faster yet localizes objects more precisely.
388 stars. No commits in the last 6 months.
Use this if you need to deploy a highly accurate object detection model in resource-constrained environments where computational efficiency is crucial.
Not ideal if you are looking for a pre-trained, ready-to-use object detection model without needing to perform model distillation.
Stars: 388
Forks: 52
Language: Python
License: Apache-2.0
Category: ML Frameworks
Last pushed: Oct 24, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/HikariTJU/LD"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
Guang000/Awesome-Dataset-Distillation
A curated list of awesome papers on dataset distillation and related applications.
dkozlov/awesome-knowledge-distillation
Awesome Knowledge Distillation
SforAiDl/KD_Lib
A Pytorch Knowledge Distillation library for benchmarking and extending works in the domains of...
SakurajimaMaiii/ProtoKD
[ICASSP 2023] Prototype Knowledge Distillation for Medical Segmentation with Missing Modality
szq0214/FKD
Official code for our ECCV'22 paper "A Fast Knowledge Distillation Framework for Visual Recognition"