fairlearn/fairlearn
A Python package to assess and improve the fairness of machine learning models.
This tool helps AI system developers and data scientists evaluate and improve the fairness of their machine learning models. You provide an existing model and the sensitive groups you want to assess, and it outputs metrics quantifying potential biases and offers mitigation algorithms to reduce unfairness. It is designed for anyone building AI systems for sensitive applications such as hiring or lending.
2,213 stars. Used by 9 other packages. Actively maintained with 2 commits in the last 30 days. Available on PyPI.
Use this if you are developing an AI system and need to ensure it treats different groups of people equitably, avoiding issues like biased loan approvals or unequal service quality.
Not ideal if you want a non-technical introduction to general ethical-AI principles, or if your primary concern is not quantifiable, group-based fairness in model predictions.
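To make "quantifiable group-based fairness" concrete, here is a minimal, dependency-free sketch of demographic parity difference, one of the metrics fairlearn computes (via `fairlearn.metrics.demographic_parity_difference`). The toy data and the hand-rolled helper below are illustrative assumptions, not fairlearn's implementation:

```python
# Demographic parity difference: the gap between the highest and lowest
# positive-prediction (selection) rate across sensitive groups.
# fairlearn exposes this as fairlearn.metrics.demographic_parity_difference;
# this standalone version just shows what the metric measures.

def demographic_parity_difference(y_pred, sensitive_features):
    by_group = {}
    for pred, group in zip(y_pred, sensitive_features):
        by_group.setdefault(group, []).append(pred)
    # Selection rate per group: fraction of positive predictions.
    rates = [sum(preds) / len(preds) for preds in by_group.values()]
    return max(rates) - min(rates)

# Toy example: the model selects 75% of group "a" but only 25% of group "b".
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(y_pred, groups))  # 0.5
```

A value of 0 means all groups are selected at the same rate; values near 1 indicate a large disparity that fairlearn's mitigation algorithms aim to reduce.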
Stars
2,213
Forks
484
Language
Python
License
MIT
Category
ML frameworks
Last pushed
Mar 12, 2026
Commits (30d)
2
Dependencies
5
Reverse dependents
9
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/fairlearn/fairlearn"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Compare
Related frameworks
Trusted-AI/AIF360
A comprehensive set of fairness metrics for datasets and machine learning models, explanations...
microsoft/responsible-ai-toolbox
Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment...
holistic-ai/holisticai
This is an open-source tool to assess and improve the trustworthiness of AI systems.
EFS-OpenSource/Thetis
Service to examine data processing pipelines (e.g., machine learning or deep learning pipelines)...
IBM/inFairness
PyTorch package to train and audit ML models for Individual Fairness