sumanthprabhu/DQC-Toolkit
Quality Checks for Training Data in Machine Learning
This toolkit helps machine learning engineers and data scientists improve model accuracy by automatically finding mistakes in text-based training data. You provide a labeled text dataset, and it flags labels that are likely incorrect or assigns a confidence score to free-text labels, so your models learn from clean, reliable data.
No commits in the last 6 months. Available on PyPI.
Use this if you are building text classification models or working with large language models and suspect your training data contains errors or needs label quality assessment.
Not ideal if your dataset does not involve text, or if you are looking for a general-purpose data cleaning tool for numerical or image data.
Stars
7
Forks
—
Language
Jupyter Notebook
License
MIT
Category
Last pushed
Oct 02, 2024
Monthly downloads
55
Commits (30d)
0
Dependencies
6
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/sumanthprabhu/DQC-Toolkit"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
skrub-data/skrub
Machine learning with dataframes
biolab/orange3
🍊 📊 💡 Orange: Interactive data analysis
root-project/root
The official repository for ROOT: analyzing, storing and visualizing big data, scientifically
cleanlab/cleanlab
Cleanlab's open-source library is the standard data-centric AI package for data quality and...
drivendataorg/deon
A command line tool to easily add an ethics checklist to your data science projects.