fairlearn and inFairness

These are complementary tools addressing different fairness paradigms. fairlearn focuses on group fairness metrics such as demographic parity and equalized odds, while inFairness specializes in individual fairness constraints (similarity-based fairness enforced during training). Depending on which fairness definition your use case requires, the two can also be used together.

                    fairlearn          inFairness
Score               78 (Verified)      49 (Emerging)
Maintenance         13/25              2/25
Adoption            15/25              9/25
Maturity            25/25              25/25
Community           25/25              13/25
Stars               2,213              66
Forks               484                8
Downloads:
Commits (30d)       2                  0
Language            Python             Python
License             MIT                Apache-2.0
Risk flags          None               Stale 6 months

About fairlearn

fairlearn/fairlearn

A Python package to assess and improve fairness of machine learning models.

This tool helps AI system developers and data scientists evaluate and improve the fairness of their machine learning models. You provide an existing AI model and information about the groups you want to assess for fairness, and it outputs metrics quantifying potential biases and offers algorithms to mitigate unfairness. It's designed for anyone building AI systems for sensitive applications like hiring or lending.

AI-ethics responsible-AI bias-detection machine-learning-fairness data-science
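To make the group-fairness idea concrete, here is a minimal pure-Python sketch of the kind of check fairlearn automates: comparing a model's selection rate across sensitive groups (demographic parity). This is an illustration of the concept, not fairlearn's actual API; all predictions and group labels below are synthetic.

```python
# Sketch of a group-fairness (demographic parity) check.
# All data is synthetic and for illustration only.

def selection_rates(y_pred, groups):
    """Fraction of positive predictions per sensitive group."""
    rates = {}
    for g in set(groups):
        preds = [p for p, grp in zip(y_pred, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)
    return rates

def demographic_parity_difference(y_pred, groups):
    """Largest gap in selection rate between any two groups."""
    rates = selection_rates(y_pred, groups)
    return max(rates.values()) - min(rates.values())

# Synthetic binary predictions (1 = selected) and group labels
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(selection_rates(y_pred, groups))                # {'a': 0.75, 'b': 0.25}
print(demographic_parity_difference(y_pred, groups))  # 0.5
```

In fairlearn itself, this per-group bookkeeping is handled by its metrics utilities, which additionally cover metrics like equalized odds and provide mitigation algorithms.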

About inFairness

IBM/inFairness

PyTorch package to train and audit ML models for Individual Fairness

This package helps data scientists and ML engineers ensure their models treat similar individuals similarly, preventing biased or discriminatory outcomes. You provide your model and data, and it helps you audit, train, and adjust the model to meet individual fairness criteria, yielding a more equitable and trustworthy model.

ethical-AI fairness-auditing responsible-ML bias-mitigation model-governance
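The individual-fairness criterion ("similar individuals receive similar outputs") can be sketched as a Lipschitz-style audit: for every pair of inputs, the gap between model outputs should be bounded by a constant times their distance under a fairness metric. The toy model, metric, and data below are hypothetical stand-ins, not inFairness's actual API.

```python
# Sketch of an individual-fairness audit: flag pairs of similar inputs
# whose model outputs differ by more than L times their input distance.
# The metric, model, and data are all toy examples.

def d_x(a, b):
    """Toy input metric: Euclidean distance over the (fair) features."""
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

def model(x):
    """Toy linear scoring model (stand-in for a trained network)."""
    return 0.6 * x[0] + 0.4 * x[1]

def individual_fairness_violations(points, L=1.0):
    """Return pairs (i, j, dx, dy) whose output gap dy exceeds L * dx."""
    violations = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            dx = d_x(points[i], points[j])
            dy = abs(model(points[i]) - model(points[j]))
            if dy > L * dx + 1e-9:
                violations.append((i, j, dx, dy))
    return violations

# Points 0 and 1 are near-identical individuals; point 2 is distinct.
points = [(0.2, 0.4), (0.25, 0.4), (0.9, 0.1)]

# With a strict Lipschitz bound L=0.5, only the near-identical pair
# (0, 1) is flagged: their score gap is too large for how similar they are.
print(individual_fairness_violations(points, L=0.5))
```

inFairness implements this idea with learned fairness metrics and PyTorch training routines, so the constraint is enforced during training rather than only audited afterwards.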

Scores updated daily from GitHub, PyPI, and npm data.