Awesome-Dataset-Distillation and Awesome-Knowledge-Distillation
These are ecosystem siblings within knowledge compression research: dataset distillation (A) and knowledge distillation (B) are related but distinct techniques. Both compress some part of the machine learning pipeline, one by shrinking the training data into a small synthetic set, the other by transferring learned representations from a large model to a smaller one.
About Awesome-Dataset-Distillation
Guang000/Awesome-Dataset-Distillation
A curated list of awesome papers on dataset distillation and related applications.
This project compiles a detailed list of research papers on dataset distillation: a technique for synthesizing a much smaller dataset on which models can be trained to perform almost as well as if they had been trained on the original, much larger dataset. Its primary users are machine learning researchers and practitioners who work with large datasets and need to reduce their size for efficiency or other applications.
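To make the idea concrete, here is a toy NumPy sketch (not taken from the list) of gradient matching, one family of dataset-distillation methods. Two synthetic points are optimized so that the training gradient they induce on a simple linear model agrees with the gradient computed on 200 real points; the data, model, and hyperparameters are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real" dataset: 200 noisy points on the line y = 3x + 1 (invented for illustration).
X_real = rng.uniform(-1.0, 1.0, 200)
y_real = 3.0 * X_real + 1.0 + 0.05 * rng.standard_normal(200)

def grad(params, x, y):
    """Gradient of mean-squared-error loss for the model y_hat = w*x + b."""
    w, b = params
    r = w * x + b - y
    return np.array([np.mean(r * x), np.mean(r)])

# Random model weights at which synthetic and real gradients should agree.
probes = rng.standard_normal((8, 2))
real_g = [grad(p, X_real, y_real) for p in probes]

def match_loss(syn):
    """Squared distance between synthetic-data and real-data gradients."""
    xs, ys = syn[:2], syn[2:]
    return sum(float(np.sum((grad(p, xs, ys) - rg) ** 2))
               for p, rg in zip(probes, real_g))

# Distill: optimize 2 synthetic (x, y) pairs by finite-difference gradient descent.
syn = rng.standard_normal(4)
eps, lr = 1e-4, 0.01  # toy hyperparameters, chosen for this example only
for _ in range(3000):
    g = np.array([(match_loss(syn + eps * e) - match_loss(syn - eps * e)) / (2 * eps)
                  for e in np.eye(4)])
    syn -= lr * g

# A linear model fit on just the 2 distilled points should recover roughly w = 3, b = 1.
A = np.stack([syn[:2], np.ones(2)], axis=1)
w_fit, b_fit = np.linalg.lstsq(A, syn[2:], rcond=None)[0]
print(w_fit, b_fit)
```

The point of the sketch is the ratio: a model fit on 2 distilled points behaves like one fit on 200 real ones. The papers in the list tackle the same bilevel idea at the scale of images and deep networks.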
About Awesome-Knowledge-Distillation
FLHonker/Awesome-Knowledge-Distillation
Awesome Knowledge-Distillation. A categorized collection of knowledge-distillation papers (2014-2021).
This collection helps machine learning practitioners find relevant research papers on knowledge distillation, a technique for transferring knowledge from large, complex models to smaller, more efficient ones. Given a research problem or an interest in model optimization, it provides an organized list of academic papers covering the different methods and applications of knowledge distillation. Data scientists and ML engineers who need to deploy performant yet lightweight models will find it valuable.
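The core mechanism behind many of the listed papers is the classic soft-target loss: the student is trained to match the teacher's temperature-softened output distribution. A minimal Python sketch (the logits below are made up for illustration):

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax: higher T softens the distribution."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(teacher_logits, student_logits, T=4.0):
    """KL divergence between softened teacher and student outputs,
    scaled by T^2 as in the classic Hinton-style soft-target loss."""
    p = softmax(teacher_logits, T)  # soft teacher targets
    q = softmax(student_logits, T)  # student predictions
    return T * T * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [5.0, 2.0, 0.5]   # hypothetical teacher logits
aligned = [4.8, 2.1, 0.4]   # a student close to the teacher
off     = [0.5, 5.0, 2.0]   # a student far from the teacher
print(distillation_loss(teacher, aligned) < distillation_loss(teacher, off))  # True
```

In practice this term is combined with the ordinary cross-entropy on hard labels; the surveyed papers vary what is matched (logits, features, attention maps) and between which models.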