ies-research/multi-annotator-machine-learning
Training with Data Annotated by Multiple Error-prone Annotators
This tool helps researchers and data scientists build machine learning models from datasets labeled by multiple human annotators, even when those annotators make mistakes. It takes your raw data, along with the multiple and potentially conflicting labels for each item, and trains a more accurate model. It's designed for anyone who relies on crowd-sourced or expert human labeling to train their AI systems.
No commits in the last 6 months.
Use this if you are training a machine learning model and your labeled data comes from several human annotators whose judgments might not always agree or be perfectly accurate.
Not ideal if your dataset has perfectly accurate, single-source labels, or if you are not working with data from multiple annotators.
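To illustrate the multi-annotator setting, here is a minimal majority-vote baseline for aggregating conflicting labels. This is a generic sketch, not this library's method (which models annotator error rather than simply voting); the function and variable names are illustrative, not part of the package's API.

```python
from collections import Counter

def majority_vote(annotations):
    """Aggregate one item's labels from several annotators by majority vote.

    annotations: list of labels, one per annotator.
    Ties are broken by first-seen order (Counter preserves insertion order).
    """
    return Counter(annotations).most_common(1)[0][0]

# Three annotators label two items; disagreements are resolved by majority.
labels_per_item = [
    ["cat", "cat", "dog"],
    ["dog", "dog", "dog"],
]
aggregated = [majority_vote(a) for a in labels_per_item]
```

A model trained on `aggregated` labels ignores who produced each label; methods like those in this repository instead estimate each annotator's reliability, which typically yields better results when annotator quality varies.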
Stars
12
Forks
2
Language
Python
License
BSD-3-Clause
Category
Last pushed
Jul 15, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/ies-research/multi-annotator-machine-learning"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
cvat-ai/cvat
Annotate better with CVAT, the industry-leading data engine for machine learning. Used and...
HumanSignal/label-studio
Label Studio is a multi-type data labeling and annotation tool with standardized output format
wkentaro/labelme
Image annotation with Python. Supports polygon, rectangle, circle, line, point, and AI-assisted...
CVHub520/X-AnyLabeling
Effortless data labeling with AI support from Segment Anything and other awesome models.
doccano/doccano
Open source annotation tool for machine learning practitioners.