Trusted-AI/AIF360

A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.

Score: 69 / 100 (Established)

This project helps data professionals evaluate and improve the fairness of their machine learning models. You supply your datasets and models; it computes metrics that detect potential biases and offers algorithms to reduce them. It is aimed at data scientists, machine learning engineers, and risk managers who need to ensure their AI systems are equitable.

2,763 stars. Used by 3 other packages. Available on PyPI.

Use this if you are building or deploying AI models in critical areas like finance, HR, healthcare, or education and need to proactively identify and mitigate unfair biases.

Not ideal if you want a simple, out-of-the-box solution: interpreting the metrics and applying the mitigation techniques effectively requires machine learning expertise.
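To make the workflow above concrete, here is a minimal sketch of the kind of fairness metric AIF360 computes: statistical parity difference, the favorable-outcome rate of the unprivileged group minus that of the privileged group. The data below is synthetic and the computation is done in plain Python for illustration; in AIF360 itself this metric is exposed through its dataset and metric classes.

```python
# Sketch of statistical parity difference on synthetic data:
#   P(favorable | unprivileged) - P(favorable | privileged)
# Values near 0 indicate parity; large magnitudes suggest bias.

# Each record: (protected_attribute, label), where label 1 = favorable outcome
# and protected_attribute 1 = privileged group. Synthetic example data.
records = [
    (0, 1), (0, 0), (0, 0), (0, 0),   # unprivileged group: 1/4 favorable
    (1, 1), (1, 1), (1, 1), (1, 0),   # privileged group:   3/4 favorable
]

def favorable_rate(group):
    """Fraction of favorable outcomes within one group."""
    labels = [label for g, label in records if g == group]
    return sum(labels) / len(labels)

spd = favorable_rate(0) - favorable_rate(1)
print(spd)  # 0.25 - 0.75 = -0.5
```

A mitigation algorithm such as reweighing would then adjust instance weights so that a model trained on the reweighted data pushes this difference toward zero.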

Tags: AI ethics, fairness assessment, bias mitigation, responsible AI, machine learning auditing
Maintenance 6 / 25
Adoption 13 / 25
Maturity 25 / 25
Community 25 / 25

How are scores calculated?

Stars: 2,763
Forks: 902
Language: Python
License: Apache-2.0
Last pushed: Nov 13, 2025
Commits (30d): 0
Dependencies: 5
Reverse dependents: 3

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Trusted-AI/AIF360"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.