ies-research/multi-annotator-machine-learning

Training with Data Annotated by Multiple Error-prone Annotators

Quality score: 34 / 100 (Emerging)

This tool helps researchers and data scientists build machine learning models using datasets labeled by multiple human annotators, even when those annotators make mistakes. It takes your raw data along with the multiple, potentially conflicting, labels for each item and produces a more accurate trained model. It's designed for anyone who relies on crowd-sourced or expert human labeling to train their AI systems.

No commits in the last 6 months.

Use this if you are training a machine learning model and your labeled data comes from several human annotators whose judgments might not always agree or be perfectly accurate.

Not ideal if your dataset has perfectly accurate, single-source labels, or if you are not working with data from multiple annotators.
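To make the problem concrete, here is a minimal sketch of the simplest baseline such a tool improves on: majority voting over conflicting annotator labels. This is a generic NumPy illustration, not the repository's actual API; the annotation-matrix layout and the -1 missing-label convention are assumptions made for the example.

import numpy as np

# Toy annotation matrix: rows are items, columns are annotators.
# -1 marks an item that annotator did not label (hypothetical convention).
annotations = np.array([
    [1, 1, 0],
    [0, 0, 0],
    [1, -1, 1],
])

def majority_vote(labels: np.ndarray, n_classes: int = 2) -> np.ndarray:
    """Aggregate per-annotator labels into one label per item by majority vote."""
    aggregated = np.empty(labels.shape[0], dtype=int)
    for i, row in enumerate(labels):
        valid = row[row >= 0]  # drop missing annotations
        counts = np.bincount(valid, minlength=n_classes)
        aggregated[i] = counts.argmax()
    return aggregated

print(majority_vote(annotations))  # -> [1 0 1]

Majority voting treats every annotator as equally reliable; per the description above, this repository's methods aim to do better by accounting for annotator errors when training the model.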

crowd-sourcing data-labeling machine-learning-training human-in-the-loop-AI annotation-quality
Flags: Stale (6m) · No Package · No Dependents
Maintenance 2 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 11 / 25


Stars: 12
Forks: 2
Language: Python
License: BSD-3-Clause
Last pushed: Jul 15, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/ies-research/multi-annotator-machine-learning"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
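The same request can be made from Python. This is a minimal sketch using the requests library; the response field names ("score", "tier") are assumptions, so inspect the returned JSON to see the actual schema.

import requests

# Endpoint from the card above; no API key is needed for up to 100 requests/day.
URL = (
    "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/"
    "ies-research/multi-annotator-machine-learning"
)

response = requests.get(URL, timeout=10)
response.raise_for_status()
data = response.json()

# Hypothetical field names; print the full payload if these keys are absent.
print(data.get("score"), data.get("tier"))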