andrescorrada/IntroductionToAlgebraicEvaluation

A collection of essays and code on algebraic methods to evaluate noisy judges on unlabeled test data.

Score: 43 / 100 (Emerging)

This project offers a method to evaluate how well different decision-makers (like AI models or human experts) perform, even when you don't know the correct answers to the questions they're responding to. You input the observed agreements and disagreements between these 'noisy judges' on a test, and it helps you infer their individual correctness statistics. It's designed for anyone who needs to assess the reliability of multiple decision-makers on unlabeled data.
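
To make the idea concrete, below is a minimal, illustrative sketch of this style of calculation, not the repository's exact algebra: it assumes three conditionally independent binary judges whose accuracy is the same on both classes and above 0.5, and recovers each judge's accuracy purely from the observed pairwise agreement rates. The function names (`pairwise_agreement`, `infer_accuracies`) and the simulation are hypothetical; the repository's notebooks develop more general exact polynomial systems.

```python
import numpy as np

# Illustrative sketch (not the repository's exact method): recover the
# accuracies of three conditionally independent binary judges from their
# pairwise agreement rates alone, with no ground-truth labels.
# Assumes each judge's accuracy is the same on both classes and above 0.5.

def pairwise_agreement(votes_a, votes_b):
    """Fraction of items on which two judges give the same label."""
    return np.mean(np.asarray(votes_a) == np.asarray(votes_b))

def infer_accuracies(votes_1, votes_2, votes_3):
    """Solve for each judge's accuracy from observed pairwise agreements.

    Under conditional independence, 2*P(i agrees with j) - 1 equals
    (2*a_i - 1)*(2*a_j - 1), so three agreement rates give three
    equations whose solution is a ratio of square roots.
    """
    g12 = 2 * pairwise_agreement(votes_1, votes_2) - 1
    g13 = 2 * pairwise_agreement(votes_1, votes_3) - 1
    g23 = 2 * pairwise_agreement(votes_2, votes_3) - 1
    d1 = np.sqrt(g12 * g13 / g23)
    d2 = np.sqrt(g12 * g23 / g13)
    d3 = np.sqrt(g13 * g23 / g12)
    return (1 + d1) / 2, (1 + d2) / 2, (1 + d3) / 2

# Quick check on simulated data: three judges with known accuracies.
rng = np.random.default_rng(0)
truth = rng.integers(0, 2, size=20000)

def noisy_judge(acc):
    flips = rng.random(truth.size) > acc
    return np.where(flips, 1 - truth, truth)

votes = [noisy_judge(a) for a in (0.9, 0.8, 0.7)]
print(infer_accuracies(*votes))  # should be close to (0.9, 0.8, 0.7)
```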

Use this if you need to reliably evaluate the performance of multiple human or machine agents, especially AI systems, on a test where the true answers are unknown.

Not ideal if you already have perfectly labeled data and are using traditional, supervised evaluation methods.

Topics: AI-safety, model-evaluation, crowd-sourcing-quality, expert-review-assessment, unlabeled-data-analysis
No package published, no dependents

Maintenance: 10 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 10 / 25

Stars: 37
Forks: 4
Language: Jupyter Notebook
License: CC0-1.0
Last pushed: Feb 25, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/andrescorrada/IntroductionToAlgebraicEvaluation"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.