mlr-org/mlr3fairness
mlr3 extension for Fairness in Machine Learning
This tool helps data scientists, machine learning engineers, and ethical AI specialists check that their automated decision-making systems are fair. It takes your machine learning model's predictions and a protected attribute (such as gender or race), then reports whether the model's performance (e.g. accuracy or false positive rates) differs unfairly across those groups. You get visualizations and metrics to diagnose bias, and can apply debiasing methods to improve fairness.
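The diagnosis described above boils down to computing a performance metric separately for each protected-attribute group and comparing the results. A minimal plain-Python sketch of that idea (this is not the mlr3fairness API; the function name and all data below are hypothetical, for illustration only):

```python
# Sketch of a group fairness check: per-group accuracy and the largest gap.
# Hypothetical helper and data, not part of mlr3fairness.

def group_accuracy_gap(y_true, y_pred, groups):
    """Return per-group accuracy and the largest absolute gap between groups."""
    per_group = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        correct = sum(y_true[i] == y_pred[i] for i in idx)
        per_group[g] = correct / len(idx)
    accs = list(per_group.values())
    return per_group, max(accs) - min(accs)

# Made-up labels, predictions, and a protected attribute with two groups:
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 0, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

per_group, gap = group_accuracy_gap(y_true, y_pred, groups)
print(per_group, gap)  # group "a" scores 0.75, group "b" scores 0.5, gap 0.25
```

A large gap flags a potential fairness problem; mlr3fairness packages this pattern (and many other groupwise metrics) behind its measure objects.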
No commits in the last 6 months.
Use this if you are building or deploying machine learning models in sensitive areas like credit scoring, HR, or judicial systems, and need to identify and mitigate biases that could lead to unfair outcomes for different demographic groups.
Not ideal if you aren't already working with trained machine learning models and data, or if your primary concern is bias introduced during data collection rather than bias visible in model performance.
Stars: 15
Forks: 2
Language: HTML
License: LGPL-3.0
Category:
Last pushed: Jun 24, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/mlr-org/mlr3fairness"
Open to everyone: 100 requests/day with no key needed, or get a free key for 1,000 requests/day.
Higher-rated alternatives
GAA-UAM/scikit-fda
Functional Data Analysis Python package
mlr-org/mlr3
mlr3: Machine Learning in R - next generation
mlr-org/mlr3extralearners
Extra learners for use in mlr3.
mlr-org/mlr3book
Online version of Bischl, B., Sonabend, R., Kotthoff, L., & Lang, M. (Eds.). (2024). "Applied...
mlr-org/mlr3learners
Recommended learners for mlr3