fairlearn/fairlearn

A Python package to assess and improve fairness of machine learning models.

Score: 78 / 100 (Verified)

This tool helps AI system developers and data scientists evaluate and improve the fairness of their machine learning models. You provide an existing AI model and information about the groups you want to assess for fairness, and it outputs metrics quantifying potential biases and offers algorithms to mitigate unfairness. It's designed for anyone building AI systems for sensitive applications like hiring or lending.
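To make "metrics quantifying potential biases" concrete, here is a stdlib-only sketch of one such group metric, the demographic parity difference: the gap between the highest and lowest positive-prediction rate across sensitive groups. This only illustrates the idea; fairlearn's own API (e.g. its MetricFrame and mitigation algorithms) is far richer than this toy function.

```python
from collections import defaultdict

def selection_rates(y_pred, sensitive):
    """Fraction of positive predictions for each sensitive-group value."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(y_pred, sensitive):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_difference(y_pred, sensitive):
    """Max minus min selection rate across groups; 0 means parity."""
    rates = selection_rates(y_pred, sensitive).values()
    return max(rates) - min(rates)

# Group "a" is selected 3/4 of the time, group "b" only 1/4.
y_pred    = [1, 0, 1, 1, 0, 0, 1, 0]
sensitive = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(y_pred, sensitive))  # 0.5
```

A value of 0.5 here flags a large disparity; a hiring or lending model with such a gap between groups would warrant mitigation.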

2,213 stars. Used by 9 other packages. Actively maintained with 2 commits in the last 30 days. Available on PyPI.

Use this if you are developing an AI system and need to ensure it treats different groups of people equitably, avoiding issues like biased loan approvals or unequal service quality.

Not ideal if you want a non-technical introduction to general ethical-AI principles, or if your concern is not quantifiable, group-based fairness in model predictions.

AI-ethics responsible-AI bias-detection machine-learning-fairness data-science
Maintenance: 13 / 25
Adoption: 15 / 25
Maturity: 25 / 25
Community: 25 / 25


Stars: 2,213
Forks: 484
Language: Python
License: MIT
Last pushed: Mar 12, 2026
Commits (30d): 2
Dependencies: 5
Reverse dependents: 9

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/fairlearn/fairlearn"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
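The same endpoint can be called from Python with only the standard library. A minimal sketch, assuming the endpoint returns a JSON body (the response's field names are not documented here, so none are hard-coded below):

```python
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a repository."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode a quality record (assumes a JSON response)."""
    with urlopen(quality_url(category, owner, repo), timeout=10) as resp:
        return json.load(resp)

print(quality_url("ml-frameworks", "fairlearn", "fairlearn"))
# https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/fairlearn/fairlearn
```

Unauthenticated calls count against the 100-requests/day limit noted above.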