sumanthprabhu/DQC-Toolkit

Quality Checks for Training Data in Machine Learning

Score: 33 / 100 (Emerging)

This toolkit helps machine learning engineers and data scientists improve model accuracy by automatically flagging likely mistakes in text-based training data. You provide your labeled text dataset, and it identifies which labels are likely incorrect, or assigns a confidence score to free-text labels, so your models learn from clean, reliable data.

No commits in the last 6 months. Available on PyPI.

Use this if you are building text classification models or working with large language models and suspect your training data contains errors or needs label quality assessment.

Not ideal if your dataset does not involve text, or if you are looking for a general-purpose data cleaning tool for numerical or image data.

data-quality machine-learning-engineering natural-language-processing dataset-curation model-training
Maintenance: 0 / 25 (stale 6 months)
Adoption: 8 / 25
Maturity: 25 / 25
Community: 0 / 25


Stars: 7
Forks:
Language: Jupyter Notebook
License: MIT
Last pushed: Oct 02, 2024
Monthly downloads: 55
Commits (30d): 0
Dependencies: 6

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/sumanthprabhu/DQC-Toolkit"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
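The same endpoint can also be queried from Python with only the standard library. This is a minimal sketch based on the curl example above; the shape of the JSON response is an assumption, so inspect the actual payload before relying on specific field names.

```python
import json
import urllib.request

# Base endpoint taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def api_url(owner: str, repo: str) -> str:
    """Build the quality-report URL for a GitHub owner/repo pair."""
    return f"{API_BASE}/ml-frameworks/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the quality report.

    No API key required; subject to the 100 requests/day limit.
    Assumes the endpoint returns JSON.
    """
    with urllib.request.urlopen(api_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Prints the URL for this project; call fetch_quality() to hit the API.
    print(api_url("sumanthprabhu", "DQC-Toolkit"))
```

Swapping in a different `owner`/`repo` pair queries any other project the catalog tracks.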