IBM/inFairness

PyTorch package to train and audit ML models for Individual Fairness

Score: 49 / 100 (Emerging)

This package helps data scientists and ML engineers ensure their models treat similar individuals similarly, guarding against biased or discriminatory outcomes. You provide your PyTorch model and data; inFairness then lets you audit the model for individual fairness violations and train or adjust it to meet individual fairness criteria. The result is a more equitable and trustworthy ML model.
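
For orientation, here is a minimal sketch of fairness-constrained training with the package's SenSeI algorithm, following the usage pattern in the project README. The toy network and synthetic data are placeholders, and exact argument names may differ between releases:

import torch
from torch.nn import functional as F
from inFairness import distances
from inFairness.fairalgo import SenSeI

# Synthetic stand-ins: 200 samples, 10 features, binary labels
X_train = torch.randn(200, 10)
y_train = torch.randint(0, 2, (200,))
network = torch.nn.Sequential(
    torch.nn.Linear(10, 16), torch.nn.ReLU(), torch.nn.Linear(16, 2)
)

# Fair metric on inputs (sensitive subspace learned via SVD) and a
# squared-Euclidean metric on model outputs
distance_x = distances.SVDSensitiveSubspaceDistance()
distance_y = distances.SquaredEuclideanDistance()
distance_x.fit(X_train, n_components=5)
distance_y.fit(num_dims=2)

# Wrap the model in the SenSeI fair-training algorithm
fairalgo = SenSeI(network, distance_x, distance_y, F.cross_entropy,
                  rho=5.0, eps=0.1, auditor_nsteps=100, auditor_lr=1e-3)

optimizer = torch.optim.Adam(network.parameters(), lr=1e-3)
fairalgo.train()
for epoch in range(3):
    optimizer.zero_grad()
    response = fairalgo(X_train, y_train)  # response object carries the fairness-regularized loss
    response.loss.backward()
    optimizer.step()

Here rho and eps set the strength of the fairness constraint, while the auditor steps search for similar inputs that the model treats differently.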

Used by 1 other package. No commits in the last 6 months. Available on PyPI.

Use this if you are a data scientist or ML engineer concerned about your models making unfair predictions based on individual characteristics.

Not ideal if you are looking for solutions focused on group fairness or general model interpretability rather than individual-level fairness.

ethical-AI fairness-auditing responsible-ML bias-mitigation model-governance
Status: Stale (6 months without commits)
Maintenance: 2 / 25
Adoption: 9 / 25
Maturity: 25 / 25
Community: 13 / 25


Stars: 66
Forks: 8
Language: Python
License: Apache-2.0
Last pushed: Sep 17, 2025
Commits (30d): 0
Dependencies: 6
Reverse dependents: 1

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/IBM/inFairness"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
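
The same data can also be fetched from Python with requests (a small sketch; it assumes the endpoint returns a JSON body):

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/IBM/inFairness"
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # fail loudly on rate limiting or server errors
print(resp.json())       # assumes a JSON payload; inspect it for the score fields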