IBM/inFairness
PyTorch package to train and audit ML models for Individual Fairness
This package helps data scientists and ML engineers ensure their models treat similar individuals similarly, the core requirement of individual fairness. You provide a PyTorch model and data; the package audits the model for fairness violations and trains or adjusts it to satisfy individual fairness criteria (see the sketch below), yielding a more equitable and trustworthy model.
Used by 1 other package. No commits in the last 6 months. Available on PyPI.
Use this if you are a data scientist or ML engineer concerned about your models making unfair predictions based on individual characteristics.
Not ideal if you are looking for solutions focused on group fairness or general model interpretability rather than individual-level fairness.
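A minimal sketch of the training workflow, assuming the SenSeI API as documented in the project README; the toy model, data, and hyperparameter values are illustration-only placeholders, not prescribed settings:

import torch
import torch.nn as nn
from inFairness import distances
from inFairness.fairalgo import SenSeI

# Toy data and model, purely for illustration
X_train = torch.randn(200, 10)
y_train = torch.randn(200, 1)
network = nn.Linear(10, 1)
lossfn = nn.MSELoss()
optimizer = torch.optim.Adam(network.parameters(), lr=1e-3)

# Fair metrics on inputs and outputs
distance_x = distances.SVDSensitiveSubspaceDistance()
distance_y = distances.EuclideanDistance()
distance_x.fit(X_train, n_components=5)

# SenSeI wraps the model and regularizes it toward individual fairness;
# hyperparameters below are placeholder values
fairalgo = SenSeI(network, distance_x, distance_y, lossfn,
                  rho=1.0, eps=1e-3, auditor_nsteps=10, auditor_lr=0.05)
fairalgo.train()

loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(X_train, y_train), batch_size=32)
for x, y in loader:
    optimizer.zero_grad()
    result = fairalgo(x, y)      # returns an object carrying the fair loss
    result.loss.backward()
    optimizer.step()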
Stars: 66
Forks: 8
Language: Python
License: Apache-2.0
Category: ml-frameworks
Last pushed: Sep 17, 2025
Commits (30d): 0
Dependencies: 6
Reverse dependents: 1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/IBM/inFairness"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
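The same request in Python, as a minimal sketch; the endpoint is the one from the curl command above, and since the response schema isn't documented here, the code simply prints the parsed JSON:

import requests

# Endpoint from the curl example above
URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/IBM/inFairness"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()   # surface HTTP errors (e.g., rate limiting)
print(resp.json())        # response schema is not documented here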
Higher-rated alternatives
fairlearn/fairlearn
A Python package to assess and improve fairness of machine learning models.
Trusted-AI/AIF360
A comprehensive set of fairness metrics for datasets and machine learning models, explanations...
microsoft/responsible-ai-toolbox
Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment...
holistic-ai/holisticai
An open-source tool to assess and improve the trustworthiness of AI systems.
EFS-OpenSource/Thetis
Service to examine data processing pipelines (e.g., machine learning or deep learning pipelines)...