sail-sg/D-TRAK

Intriguing Properties of Data Attribution on Diffusion Models (ICLR 2024)

31 / 100 (Emerging)

This project helps machine learning researchers and practitioners understand which specific training images most influence the outputs of a diffusion model. You input a trained diffusion model and a dataset, and it outputs visualizations showing the "proponent" (most supportive) and "opponent" (most opposing) training examples for any given generated image, letting you trace a model's output back to the data points that shaped it.
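
The repository's own interface isn't shown on this page, so the following is only a minimal conceptual sketch of the idea behind such rankings: TRAK-style attribution scores training examples by the inner product of randomly projected per-example gradient features (the real D-TRAK pipeline differs in its choice of loss and features). The gradients below are random stand-ins:

import numpy as np

# Toy stand-ins: in practice these would be per-example gradients of a
# diffusion-model loss with respect to the model parameters.
rng = np.random.default_rng(0)
n_train, n_params, proj_dim = 100, 512, 32
train_grads = rng.normal(size=(n_train, n_params))  # one gradient row per training image
query_grad = rng.normal(size=n_params)              # gradient for one generated image

# TRAK-style random projection down to a small feature dimension.
proj = rng.normal(size=(n_params, proj_dim)) / np.sqrt(proj_dim)
phi_train = train_grads @ proj
phi_query = query_grad @ proj

# Influence score per training example; high = proponent, low = opponent.
scores = phi_train @ phi_query
order = np.argsort(scores)
print("top proponents:", order[::-1][:5])
print("top opponents: ", order[:5])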

No commits in the last 6 months.

Use this if you need to debug, explain, or improve the behavior of diffusion models by understanding the specific training data points that contribute to their generated outputs.

Not ideal if you need to attribute other types of machine learning models (e.g., discriminative classifiers); the methods here are specific to diffusion models.

Tags: diffusion models, explainable AI, model interpretability, image generation, machine learning research
Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 8 / 25


Stars: 37
Forks: 3
Language: Jupyter Notebook
License: MIT
Last pushed: Jan 23, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/sail-sg/D-TRAK"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
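
As a quick sketch, the same call from Python using only the standard library; the JSON schema of the response isn't documented on this page, so the example just pretty-prints whatever comes back (the mechanism for passing an API key also isn't specified here, so none is sent):

import json
import urllib.request

url = "https://pt-edge.onrender.com/api/v1/quality/diffusion/sail-sg/D-TRAK"
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)
print(json.dumps(data, indent=2))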