mlr-org/mlr3fairness

mlr3 extension for Fairness in Machine Learning

Quality score: 34/100 (Emerging)

This tool helps data scientists, machine learning engineers, and ethical AI specialists assess whether their automated decision-making systems treat groups fairly. It takes your machine learning model's predictions and a protected attribute such as gender or race, then reports whether the model's performance (e.g. accuracy or false positive rate) differs across those groups. You get visualizations and metrics to diagnose bias, and can apply debiasing methods to improve fairness.
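A minimal sketch of that workflow in R, assuming the API shown in the package's documentation (the bundled adult_train/adult_test tasks with a preset protected attribute, fairness measures such as fairness.acc, and the reweighing debiasing step via mlr3pipelines); treat it as illustrative rather than canonical:

library(mlr3)
library(mlr3fairness)
library(mlr3pipelines)

# The bundled Adult tasks ship with "sex" marked as the protected attribute (pta).
task_train = tsk("adult_train")
task_test  = tsk("adult_test")

# Train a plain classifier and score fairness metrics on held-out data.
learner = lrn("classif.rpart", predict_type = "prob")
learner$train(task_train)
prediction = learner$predict(task_test)

# fairness.acc measures the accuracy gap between protected groups;
# fairness.fpr does the same for false positive rates.
prediction$score(msr("fairness.acc"), task = task_test)
prediction$score(msr("fairness.fpr"), task = task_test)

# Debiasing: wrap the learner in a reweighing preprocessing step and re-score.
reweighed = as_learner(po("reweighing_wts") %>>% lrn("classif.rpart", predict_type = "prob"))
reweighed$train(task_train)
reweighed$predict(task_test)$score(msr("fairness.acc"), task = task_test)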

No commits in the last 6 months.

Use this if you are building or deploying machine learning models in sensitive areas like credit scoring, HR, or judicial systems, and need to identify and mitigate biases that could lead to unfair outcomes for different demographic groups.

Not ideal if you're not already working with machine learning models and data, or if your primary concern is bias originating from data collection rather than model performance.

Tags: ethical-AI, bias-detection, algorithmic-fairness, risk-assessment, HR-tech
Badges: Stale (6m), No Package, No Dependents

Score breakdown:
Maintenance: 2/25
Adoption: 6/25
Maturity: 16/25
Community: 10/25


Stars: 15
Forks: 2
Language: HTML
License: LGPL-3.0
Category: mlr3-ecosystem
Last pushed: Jun 24, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/mlr-org/mlr3fairness"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
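If you would rather pull this record into R than shell out to curl, jsonlite can fetch and parse the endpoint directly (a hedged sketch: the response schema is not documented on this page, so the code only inspects whatever comes back):

library(jsonlite)

# fromJSON() accepts a URL directly and parses the JSON response.
resp = fromJSON("https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/mlr-org/mlr3fairness")

# The field layout is not shown here, so print the structure to see the fields.
str(resp)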