Awesome-Dataset-Distillation and awesome-knowledge-distillation-for-object-detection
These are ecosystem siblings covering two related but distinct compression techniques: Awesome-Dataset-Distillation curates dataset distillation research (compressing a large training set into a small synthetic one), while awesome-knowledge-distillation-for-object-detection curates knowledge distillation work (transferring a large teacher model's knowledge to a smaller student), specialized to object detection. Practitioners can consult the first to shrink training data and the second to shrink and speed up detection models.
About Awesome-Dataset-Distillation
Guang000/Awesome-Dataset-Distillation
A curated list of awesome papers on dataset distillation and related applications.
This project compiles a detailed list of research papers on dataset distillation: synthesizing a much smaller dataset on which models can be trained to nearly the accuracy they would reach on the original, much larger dataset. The primary users are machine learning researchers and practitioners who work with large datasets and need to reduce their size for training efficiency or downstream applications.
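To make the idea concrete, here is a minimal, hedged sketch of dataset distillation in its most degenerate form: compressing 1,000 labeled points down to one synthetic example per class (the class mean) and checking that a classifier "trained" on just those two points still performs well. Real methods in the list learn the synthetic data by optimization; the toy data, the mean-as-prototype shortcut, and the `predict` helper here are illustrative assumptions, not code from the repository.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "large" dataset: two well-separated Gaussian classes, 500 points each.
X0 = rng.normal(loc=-2.0, scale=1.0, size=(500, 2))
X1 = rng.normal(loc=+2.0, scale=1.0, size=(500, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 500 + [1] * 500)

# "Distill" the 1000-point dataset down to one synthetic example per
# class -- here simply the class mean, a crude stand-in for the learned
# synthetic images that actual dataset distillation methods produce.
synthetic = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(points, prototypes):
    # Nearest-prototype classifier fit on the 2-point distilled set.
    d = np.linalg.norm(points[:, None, :] - prototypes[None, :, :], axis=2)
    return d.argmin(axis=1)

# Accuracy on the full original dataset using only the distilled set.
acc = (predict(X, synthetic) == y).mean()
```

On this toy problem the two distilled points recover nearly all of the full dataset's accuracy, which is the property real dataset distillation methods pursue on image benchmarks.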
About awesome-knowledge-distillation-for-object-detection
LutingWang/awesome-knowledge-distillation-for-object-detection
A curated list of awesome knowledge distillation papers and codes for object detection.
This is a curated list of research papers and associated code for 'knowledge distillation' techniques in object detection. These techniques help make object detection models — which identify and locate objects within images or videos — more efficient and faster, especially for deployment on devices with limited computing power. The resource is for researchers, machine learning engineers, and computer vision scientists working to optimize and deploy object detection systems.
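For context on the core mechanism the listed papers build on, here is a minimal sketch of the classic knowledge distillation loss (temperature-softened KL divergence between teacher and student logits, in the style of Hinton et al.); the detection-specific papers in this list typically extend it with feature- and box-level terms. The function names, logits, and temperature value are illustrative assumptions, not drawn from any repository in the list.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-softened softmax over a list of raw logits.
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, temperature=4.0):
    # KL(teacher || student) on the softened distributions, scaled by
    # T^2 so gradients stay comparable across temperatures.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return temperature ** 2 * kl

# A student that matches the teacher incurs zero loss; a divergent
# student incurs a positive penalty that training then minimizes.
zero = kd_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0])
pos = kd_loss([0.1, 0.2, 0.3], [3.0, -1.0, -2.0])
```

In practice this classification term is combined with the ordinary detection losses, and a higher temperature exposes more of the teacher's "dark knowledge" about relative class similarities.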