guldoganozgur/ei_fairness
Code for the paper "Equal Improvability: A New Fairness Notion Considering the Long-term Impact" (poster at ICLR 2023).
This project helps machine-learning researchers and practitioners evaluate and improve the long-term fairness of their models. Given datasets and model outputs, it analyzes how different groups are impacted over time, revealing whether each group has an equal chance to improve its outcomes. This is useful for data scientists and AI ethicists concerned with equitable AI.
No commits in the last 6 months.
Use this if you are developing or deploying AI systems and need to rigorously assess and quantify fairness, particularly how decisions might create or perpetuate disparities across different user groups over time.
Not ideal if you are looking for a plug-and-play solution for general data analysis or a fairness tool for non-AI applications.
Stars: 7
Forks: 4
Language: Jupyter Notebook
License: MIT
Category:
Last pushed: Jan 22, 2024
Commits (last 30 days): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/guldoganozgur/ei_fairness"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000 requests/day.
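If you query the API from a script rather than curl, the endpoint URL can be built from its parts. This is a minimal sketch that only assumes the `/api/v1/quality/<category>/<owner>/<repo>` path structure visible in the curl example above; the response schema is not documented here, so the actual request and parsing are left to the caller.

```python
# Sketch: build the quality-API URL for a given repository.
# Assumes only the path structure shown in the curl example on this page.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Return the API URL for one repository's quality data."""
    return f"{BASE}/{category}/{owner}/{repo}"

# Reproduces the curl example above:
print(quality_url("ml-frameworks", "guldoganozgur", "ei_fairness"))
```

You can then pass this URL to any HTTP client (e.g. `urllib.request.urlopen` or `requests.get`) and decode the JSON body.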
Higher-rated alternatives
fairlearn/fairlearn
A Python package to assess and improve fairness of machine learning models.
Trusted-AI/AIF360
A comprehensive set of fairness metrics for datasets and machine learning models, explanations...
microsoft/responsible-ai-toolbox
Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment...
holistic-ai/holisticai
This is an open-source tool to assess and improve the trustworthiness of AI systems.
EFS-OpenSource/Thetis
Service to examine data processing pipelines (e.g., machine learning or deep learning pipelines)...