EqualityAI/EqualityML

Evidence-based tools and community collaboration to end algorithmic bias, one data scientist at a time.

Score: 38 / 100 (Emerging)

This project provides practical tools and guidance for data scientists to detect and reduce unfairness in their machine learning models. It helps ensure that model outputs, such as predictions or classifications, are equitable across different groups of people. Data scientists can input their datasets and models, then use the provided metrics and methods to identify and mitigate bias, leading to more responsible and fair AI systems.

No commits in the last 6 months. Available on PyPI.

Use this if you are a data scientist building or deploying machine learning models and want to ensure they treat all user groups fairly and do not perpetuate or amplify societal biases.

Not ideal if you are looking for a general-purpose machine learning library without a specific focus on fairness, or if you are not working with tabular or structured data that influences decisions about people.

algorithmic-fairness responsible-ai bias-detection model-governance machine-learning-ethics
Status: Stale (6 months)
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 25 / 25
Community: 6 / 25


Stars: 35
Forks: 2
Language: Jupyter Notebook
License: Apache-2.0
Last pushed: Oct 29, 2023
Commits (30d): 0
Dependencies: 10

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/EqualityAI/EqualityML"

Open to everyone: 100 requests/day with no API key. A free key raises the limit to 1,000/day.
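The same endpoint can be called from Python instead of curl. A minimal sketch, assuming the URL pattern shown above (`/api/v1/quality/<category>/<owner>/<repo>`); the shape of the JSON response is not documented here, so the decoded result is treated as an opaque dict.

```python
# Minimal Python client for the quality API (stdlib only).
# The base URL and path segments come from the curl example on this page;
# the response schema is an assumption and may differ.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the API URL for a project's quality data."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the quality JSON (no key needed: 100 requests/day)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


# Example: the URL for this project, matching the curl command above.
url = quality_url("ml-frameworks", "EqualityAI", "EqualityML")
```

Using the stdlib keeps the example dependency-free; with `requests` installed, `requests.get(url).json()` is the equivalent one-liner.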