guldoganozgur/ei_fairness

Code for the paper "Equal Improvability: A New Fairness Notion Considering the Long-term Impact" (poster at ICLR 2023).

Score: 35 / 100 (Emerging)

This project helps machine-learning researchers and practitioners evaluate and improve the long-term fairness of their models. Given datasets and model outputs, it analyzes how different groups are impacted over time, revealing whether all groups have an equal chance to improve their outcomes. It is aimed at data scientists and AI ethicists concerned with equitable AI.
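The core idea behind Equal Improvability can be sketched in a few lines. This is a hypothetical illustration of the notion (not the repo's actual API): among rejected individuals, compare across groups the fraction who could cross the decision threshold with a bounded change in score (an "effort" budget `delta`). The function name, threshold, and toy data below are all assumptions for illustration.

```python
import numpy as np

def improvability_rate(scores, threshold=0.5, delta=0.1):
    """Fraction of rejected samples whose score lies within `delta`
    of the acceptance threshold, i.e. samples that could improve
    enough to be accepted with bounded effort."""
    rejected = scores < threshold
    improvable = rejected & (scores >= threshold - delta)
    # Guard against division by zero when no sample is rejected.
    return improvable.sum() / max(rejected.sum(), 1)

# Toy model scores for two demographic groups (illustrative values only).
group_a = np.array([0.20, 0.45, 0.48, 0.70, 0.90])
group_b = np.array([0.10, 0.15, 0.44, 0.80, 0.95])

# Equal Improvability asks this gap to be small: both groups' rejected
# members should have a similar chance of improving into acceptance.
gap = abs(improvability_rate(group_a) - improvability_rate(group_b))
```

Here group A has 2 of 3 rejected samples within reach of the threshold, group B only 1 of 3, so the gap is nonzero; a training method satisfying the notion would drive it toward zero.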

No commits in the last 6 months.

Use this if you are developing or deploying AI systems and need to rigorously assess and quantify fairness, particularly how decisions might create or perpetuate disparities across different user groups over time.

Not ideal if you are looking for a plug-and-play solution for general data analysis or a fairness tool for non-AI applications.

Tags: AI ethics · algorithmic fairness · machine learning research · data science · equity assessment
Badges: Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 4 / 25
Maturity 16 / 25
Community 15 / 25


Stars: 7
Forks: 4
Language: Jupyter Notebook
License: MIT
Last pushed: Jan 22, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/guldoganozgur/ei_fairness"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
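For scripted access, the curl call above can be wrapped in a small helper. This is a sketch under assumptions: only the URL pattern shown on this page is known, and the response schema is not documented here, so the code builds and returns the request URL rather than presuming any particular JSON fields.

```python
from urllib.parse import quote

# Base path taken from the curl example on this page.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a repository, URL-escaping each segment."""
    return f"{BASE}/{quote(category, safe='')}/{quote(owner, safe='')}/{quote(repo, safe='')}"

url = quality_url("ml-frameworks", "guldoganozgur", "ei_fairness")
```

The resulting URL can then be fetched with any HTTP client (e.g. `urllib.request.urlopen(url)` or the `requests` library) and parsed as JSON; mind the 100 requests/day limit for keyless access.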