ishida-lab/irreducible

[ICLR 2023] Is the Performance of My Deep Network Too Good to Be True? A Direct Approach to Estimating the Bayes Error in Binary Classification

Score: 24 / 100 (Experimental)

This helps machine learning practitioners determine the best possible performance for a binary classification model, considering the inherent uncertainty in the data. It takes in datasets with labels that reflect this uncertainty (e.g., multiple human annotations) and outputs an estimate of the Bayes error, which is the theoretical lower bound for classification error. Data scientists, machine learning engineers, and researchers can use this to benchmark their models and understand data difficulty.
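
The paper's approach is direct: given instance-level soft labels p in [0, 1], the Bayes error E[min(eta(x), 1 - eta(x))] is estimated by averaging min(p, 1 - p) over instances. A minimal sketch of that idea (the function name and data are illustrative, not the repository's actual API):

import numpy as np

def estimate_bayes_error(soft_labels):
    # soft_labels[i] is the estimated probability that instance i is
    # positive, e.g. the fraction of annotators who labeled it positive.
    # The plug-in Bayes error estimate averages min(p, 1 - p).
    p = np.asarray(soft_labels, dtype=float)
    return float(np.mean(np.minimum(p, 1.0 - p)))

# Example: five instances, each rated by ten annotators.
positive_votes = np.array([9, 10, 2, 6, 0])
print(estimate_bayes_error(positive_votes / 10))  # 0.14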

No commits in the last 6 months.

Use this if you need to understand the theoretical limits of a classification model's performance on a specific dataset, especially when evaluating state-of-the-art deep networks or identifying test set overfitting.
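
As an illustration of the overfitting check (the numbers are made up, continuing the sketch above):

bayes_error = 0.14  # e.g. from estimate_bayes_error above
test_error = 0.11   # the model's error on the held-out test set

# An error below the theoretical floor signals overfitting to the test
# set or a biased Bayes error estimate, not a genuinely better model.
if test_error < bayes_error:
    print("Suspicious: test error beats the estimated Bayes error.")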

Not ideal if you're looking for a tool to improve your model's accuracy directly or if you only have standard, single-label datasets without any information about label uncertainty.

Tags: machine-learning, model-evaluation, classification, data-analysis, deep-learning
Status: Stale (6 months), no package published, no dependents
Maintenance: 2 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 0 / 25

Stars: 22
Forks:
Language: Python
License: GPL-3.0
Last pushed: Aug 12, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/ishida-lab/irreducible"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
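
A Python equivalent of the curl call, as a sketch; the response field names are assumptions, not a documented schema:

import json
import urllib.request

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/ishida-lab/irreducible")

# No key is needed for up to 100 requests/day.
with urllib.request.urlopen(URL, timeout=10) as resp:
    record = json.load(resp)

# Illustrative field names; inspect the JSON to see the real schema.
print(record.get("score"), record.get("stars"))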