dilyabareeva/quanda

A toolkit for quantitative evaluation of data attribution methods.

Quality score: 35 / 100 (Emerging)

This toolkit helps machine learning practitioners and researchers quantitatively assess how well different data attribution methods explain a model's predictions. You provide a trained PyTorch model, its training data, and the attribution method you want to test, and it returns a detailed evaluation of that method's performance, along the lines of the generic sketch below. It's designed for anyone who needs to understand and validate why their models make certain decisions.
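
Conceptually, the evaluation it automates looks something like the following generic sketch. This is not quanda's actual API: the tiny model, the similarity-based attribute function, and the identical_class_score metric are hypothetical stand-ins, shown only to illustrate the kind of quantitative check a toolkit like this runs systematically.

import torch
import torch.nn as nn

# Hypothetical "trained" model and training data (stand-ins, not real assets).
torch.manual_seed(0)
train_x = torch.randn(100, 10)
train_y = torch.randint(0, 2, (100,))
model = nn.Linear(10, 2)

def attribute(model, test_point, train_x):
    # Hypothetical attribution method: raw input similarity.
    # Real methods (influence functions, TracIn, representer points, ...)
    # would use the model; this stand-in ignores it for brevity.
    with torch.no_grad():
        return train_x @ test_point  # one score per training example

def identical_class_score(model, test_x, test_y, train_x, train_y, k=5):
    # Simple sanity metric: how often do the top-k attributed training
    # examples share the test point's label? An evaluation toolkit
    # computes families of such metrics across attribution methods.
    hits = 0.0
    for x, y in zip(test_x, test_y):
        scores = attribute(model, x, train_x)
        topk = scores.topk(k).indices
        hits += (train_y[topk] == y).float().mean().item()
    return hits / len(test_x)

test_x = torch.randn(20, 10)
test_y = torch.randint(0, 2, (20,))
print(f"identical-class score: {identical_class_score(model, test_x, test_y, train_x, train_y):.2f}")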

No commits in the last 6 months. Available on PyPI.

Use this if you need to systematically compare and evaluate various training data attribution techniques for your PyTorch models to ensure they provide reliable and insightful explanations.

Not ideal if you are looking for a tool to generate data attributions themselves rather than evaluate existing methods, or if you are not working with PyTorch models.

ML interpretability · model debugging · data quality assessment · explainable AI · data influence
Stale (6 months)
Maintenance: 2 / 25
Adoption: 8 / 25
Maturity: 25 / 25
Community: 0 / 25


Stars: 57
Forks:
Language: Jupyter Notebook
License: MIT
Last pushed: Jul 14, 2025
Commits (30d): 0
Dependencies: 10

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/dilyabareeva/quanda"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
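
The same endpoint can also be queried from Python; a minimal sketch using the requests library (the response schema isn't documented on this card, so the JSON is just printed as returned):

import requests

# Public quality endpoint shown above (no API key needed for up to 100 requests/day).
url = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/dilyabareeva/quanda"

resp = requests.get(url, timeout=10)
resp.raise_for_status()  # raise on HTTP errors
print(resp.json())       # print the returned quality data as-is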