autonlab/aqua

AQuA: A Benchmarking Tool for Label Quality Assessment, NeurIPS'23 D&B

Score: 26 / 100 (Experimental)

This tool helps machine learning engineers and researchers assess the quality of labels in their datasets. You provide your dataset, and it evaluates different label error detection methods, showing you how well each method identifies mislabeled data. This helps you choose the best strategy to improve your dataset's quality before training your ML models.
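To make the idea concrete, here is a minimal, self-contained sketch (plain scikit-learn, not AQuA's own API) of what "evaluating a label error detection method" means in practice: corrupt some labels on purpose, run a detector, and measure how well the flagged samples match the injected corruptions. Everything below (dataset, detector, threshold) is a placeholder chosen for illustration.

```python
# Conceptual sketch only -- not AQuA's actual API. It illustrates the kind of
# evaluation such a benchmark automates: inject known label noise, run a
# detector, then score how well the detector recovers the corrupted indices.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)

# Toy dataset with known-clean labels.
X, y_clean = make_classification(n_samples=1000, n_classes=3,
                                 n_informative=5, random_state=0)

# Inject synthetic label noise so the ground truth about which labels are
# wrong is known exactly.
noise_mask = rng.random(len(y_clean)) < 0.10
y_noisy = y_clean.copy()
y_noisy[noise_mask] = (y_noisy[noise_mask] +
                       rng.integers(1, 3, noise_mask.sum())) % 3

# A simple confidence-based detector: flag samples whose out-of-fold predicted
# probability for the observed label is low.
proba = cross_val_predict(RandomForestClassifier(random_state=0), X, y_noisy,
                          cv=5, method="predict_proba")
observed_conf = proba[np.arange(len(y_noisy)), y_noisy]
flagged = observed_conf < 0.35  # arbitrary threshold for this sketch

# Score the detector against the known corruptions -- this is the per-method
# report a benchmark like AQuA would produce for several detectors at once.
print("precision:", precision_score(noise_mask, flagged))
print("recall:   ", recall_score(noise_mask, flagged))
```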

No commits in the last 6 months.

Use this if you need to objectively compare methods for detecting and correcting label errors across various data types and select the most effective one for your dataset.

Not ideal if you're looking for a simple, automated 'fix-all' solution for label errors and don't want to compare different detection methods.

data-quality-assessment machine-learning-engineering data-labeling model-training computer-vision
Status: Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 4 / 25


Stars: 23
Forks: 1
Language: Jupyter Notebook
License: MIT
Last pushed: Oct 17, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/autonlab/aqua"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
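The same data can be fetched from a script. Below is a minimal Python equivalent of the curl call above; the shape of the JSON response isn't documented here, so the sketch simply pretty-prints whatever the endpoint returns.

```python
# Minimal sketch of calling the same endpoint from Python (stdlib only).
# The URL comes from the curl example above; the response schema is an
# assumption, so the script just pretty-prints the JSON it receives.
import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/autonlab/aqua"

with urllib.request.urlopen(URL, timeout=30) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))
```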