aida-ugent/fairret
A fairness library in PyTorch.
Fairret helps machine learning practitioners address potential biases in their AI models. Given an existing PyTorch model and a definition of what fairness means for your specific situation, it produces a model whose predictions are more statistically fair. It is aimed at data scientists, machine learning engineers, and researchers building and deploying AI models where fairness is a critical concern.
No commits in the last 6 months.
Use this if you are developing or deploying AI models using PyTorch and need to measure and mitigate statistical biases in your model's predictions.
Not ideal if you are looking for a complete solution for real-world fairness challenges that go beyond statistical measures, or if you are not working with PyTorch models.
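To make "statistical bias" concrete, the sketch below computes a demographic-parity gap: the difference in positive-prediction rates between two groups. This is one of the statistical fairness notions libraries like fairret penalize during training. It is an illustrative, standard-library-only sketch, not fairret's actual API; the function names and sample data are hypothetical.

```python
# Illustrative sketch (not fairret's API): measuring a demographic-parity
# gap, the kind of statistical disparity a fairness loss would penalize.

def positive_rate(predictions, groups, value):
    """Fraction of positive predictions among samples in the given group."""
    members = [p for p, g in zip(predictions, groups) if g == value]
    return sum(members) / len(members)

def demographic_parity_gap(predictions, groups):
    """Absolute gap in positive-prediction rates between groups 0 and 1."""
    return abs(positive_rate(predictions, groups, 0)
               - positive_rate(predictions, groups, 1))

# Hypothetical binary predictions and sensitive-group labels.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
labels = [0, 0, 0, 0, 1, 1, 1, 1]

print(f"Demographic parity gap: {demographic_parity_gap(preds, labels):.2f}")
# → Demographic parity gap: 0.50
```

A gap of 0 would mean both groups receive positive predictions at the same rate; a fairness-aware training loss drives this gap toward 0 while the task loss preserves accuracy.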
Stars: 32
Forks: 1
Language: Python
License: MIT
Category:
Last pushed: Jul 23, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/aida-ugent/fairret"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
fairlearn/fairlearn
A Python package to assess and improve fairness of machine learning models.
Trusted-AI/AIF360
A comprehensive set of fairness metrics for datasets and machine learning models, explanations...
holistic-ai/holisticai
This is an open-source tool to assess and improve the trustworthiness of AI systems.
microsoft/responsible-ai-toolbox
Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment...
datamllab/awesome-fairness-in-ai
A curated list of awesome Fairness in AI resources