fairlearn and inFairness
These are complementary tools addressing different fairness paradigms. fairlearn focuses on group fairness metrics such as demographic parity and equalized odds, while inFairness specializes in individual fairness, enforcing similarity-based constraints during training. Depending on which fairness definition your use case requires, the two can be used together.
About fairlearn
fairlearn/fairlearn
A Python package to assess and improve fairness of machine learning models.
This tool helps AI system developers and data scientists evaluate and improve the fairness of their machine learning models. You provide an existing AI model and information about the groups you want to assess for fairness, and it outputs metrics quantifying potential biases and offers algorithms to mitigate unfairness. It's designed for anyone building AI systems for sensitive applications like hiring or lending.
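A representative group-fairness metric of the kind fairlearn reports is the demographic parity difference: the largest gap in positive-prediction rate between any two groups. fairlearn exposes this as `fairlearn.metrics.demographic_parity_difference`; the sketch below recomputes it in plain Python (with made-up toy data) purely to illustrate what the metric measures.

```python
def demographic_parity_difference(y_pred, sensitive_features):
    """Max selection-rate gap across groups (0.0 means perfect parity)."""
    counts = {}  # group -> (n_samples, n_positive_predictions)
    for pred, group in zip(y_pred, sensitive_features):
        n, pos = counts.get(group, (0, 0))
        counts[group] = (n + 1, pos + (1 if pred == 1 else 0))
    selection_rates = [pos / n for n, pos in counts.values()]
    return max(selection_rates) - min(selection_rates)

# Toy example: group "a" is selected 75% of the time, group "b" only 25%.
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(y_pred, groups))  # 0.5
```

In fairlearn itself you would pass `y_true`, `y_pred`, and `sensitive_features` to the library function (or to a `MetricFrame` for per-group breakdowns) rather than computing the rates by hand.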
About inFairness
IBM/inFairness
PyTorch package to train and audit ML models for Individual Fairness
This PyTorch package helps data scientists and ML engineers ensure their models treat similar individuals similarly, preventing biased or discriminatory outcomes. You provide your model and data, and it helps you audit, train, and adjust the model to meet individual fairness criteria. The output is a more equitable and trustworthy ML model.
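The "treat similar individuals similarly" criterion is often formalised as a Lipschitz condition: the gap between two individuals' outputs should be bounded by a constant times the distance between their inputs. The sketch below is a minimal, hand-rolled audit in that spirit; the model, distance function, and constant `L` are illustrative stand-ins, not inFairness's actual API.

```python
import math

def euclidean(x1, x2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x1, x2)))

def audit_individual_fairness(model, pairs, distance=euclidean, L=1.0):
    """Return pairs whose output gap exceeds L times their input distance."""
    violations = []
    for x1, x2 in pairs:
        gap = abs(model(x1) - model(x2))
        if gap > L * distance(x1, x2):
            violations.append((x1, x2, gap))
    return violations

# Toy model that over-weights the second feature (think: a proxy attribute).
model = lambda x: 0.1 * x[0] + 5.0 * x[1]
pairs = [
    ((1.0, 0.0), (1.0, 0.1)),  # near-identical inputs, large output gap
    ((1.0, 0.0), (2.0, 0.0)),  # differ only on the benign first feature
]
print(audit_individual_fairness(model, pairs))  # flags only the first pair
```

In inFairness, the fairness metric (distance over inputs) is learned or supplied, and training algorithms adjust the model so such violations are penalised, rather than merely reported.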