Trusted-AI/AIF360
A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
This project helps practitioners evaluate and improve the fairness of their machine learning models: given a dataset and model, it computes metrics that detect potential biases and supplies algorithms to mitigate them. It is aimed at data scientists, machine learning engineers, and risk managers who need to ensure their AI systems are equitable.
2,763 stars. Used by 3 other packages. Available on PyPI.
Use this if you are building or deploying AI models in critical areas like finance, HR, healthcare, or education and need to proactively identify and mitigate unfair biases.
Not ideal if you want a simple, out-of-the-box solution: interpreting the metrics and applying the mitigation techniques effectively requires machine learning expertise.
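To make "fairness metrics" concrete, here is an illustrative sketch in plain Python (deliberately not the AIF360 API) of one metric the library computes: the statistical parity difference, i.e. the gap in favorable-outcome rates between an unprivileged and a privileged group. Values near 0 suggest parity; the group labels and toy data below are invented for the example.

```python
def statistical_parity_difference(outcomes, groups, privileged_value):
    """Rate of favorable outcomes (1) for the unprivileged group minus
    the rate for the privileged group. 0.0 means demographic parity."""
    priv = [y for y, g in zip(outcomes, groups) if g == privileged_value]
    unpriv = [y for y, g in zip(outcomes, groups) if g != privileged_value]
    rate = lambda ys: sum(ys) / len(ys)
    return rate(unpriv) - rate(priv)

# Toy data: group "A" is treated as privileged.
outcomes = [1, 1, 0, 1, 0, 0, 1, 0]   # 1 = favorable decision
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(statistical_parity_difference(outcomes, groups, "A"))  # 0.25 - 0.75 = -0.5
```

AIF360 wraps this and many other metrics (disparate impact, equalized odds, and so on) behind dataset and metric classes, and pairs them with pre-, in-, and post-processing mitigation algorithms.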
Stars
2,763
Forks
902
Language
Python
License
Apache-2.0
Category
Last pushed
Nov 13, 2025
Commits (30d)
0
Dependencies
5
Reverse dependents
3
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Trusted-AI/AIF360"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
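The curl call above can also be scripted with the Python standard library; a minimal sketch, staying within the keyless 100-requests/day tier. The endpoint URL is taken from the listing, but the shape of the JSON response is an assumption, so inspect it before relying on specific fields.

```python
import json
import urllib.request

# Base path copied from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"

def quality_url(owner, repo):
    """Build the per-repository quality endpoint URL."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner, repo, timeout=10):
    """Fetch and decode the JSON quality report for owner/repo."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=timeout) as resp:
        return json.load(resp)

# Usage (network call, not run here):
#   data = fetch_quality("Trusted-AI", "AIF360")
#   print(json.dumps(data, indent=2))
```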
Related frameworks
fairlearn/fairlearn
A Python package to assess and improve fairness of machine learning models.
microsoft/responsible-ai-toolbox
Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment...
holistic-ai/holisticai
This is an open-source tool to assess and improve the trustworthiness of AI systems.
EFS-OpenSource/Thetis
Service to examine data processing pipelines (e.g., machine learning or deep learning pipelines)...
IBM/inFairness
PyTorch package to train and audit ML models for Individual Fairness