aida-ugent/fairret

A fairness library in PyTorch.

Score: 27 / 100 (Experimental)

Fairret helps machine learning practitioners address potential biases in their AI models. You provide an existing PyTorch model and a definition of what fairness means for your specific situation, and fairret helps you train a model whose predictions are more statistically fair. It is intended for data scientists, machine learning engineers, and researchers building and deploying AI models where fairness is a critical concern.

No commits in the last 6 months.

Use this if you are developing or deploying AI models using PyTorch and need to measure and mitigate statistical biases in your model's predictions.

Not ideal if you are looking for a complete solution for real-world fairness challenges that go beyond statistical measures, or if you are not working with PyTorch models.
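
As a rough illustration of the workflow, the sketch below adds a fairness regularization term to an ordinary PyTorch training loop. The class names PositiveRate and NormLoss and the module paths follow the project's own examples and may differ between versions; the fairret_strength weight is a hypothetical value chosen only for illustration.

import torch.nn.functional as F

# These imports follow the usage shown in fairret's examples; exact module
# paths and class names may differ between versions of the library.
from fairret.statistic import PositiveRate
from fairret.loss import NormLoss

# The statistic encodes the fairness definition: here, equal positive
# prediction rates across sensitive groups (a demographic-parity-style notion).
statistic = PositiveRate()
fairness_term = NormLoss(statistic)

fairret_strength = 0.1  # hypothetical weight trading off accuracy and fairness

def train_one_epoch(model, optimizer, loader):
    # loader yields (features, one-hot sensitive-group matrix, binary target)
    for feat, sens, target in loader:
        optimizer.zero_grad()
        logit = model(feat)
        # Standard task loss plus the differentiable fairness regularization term.
        loss = F.binary_cross_entropy_with_logits(logit, target)
        loss = loss + fairret_strength * fairness_term(logit, sens)
        loss.backward()
        optimizer.step()

Everything else in the training loop stays standard PyTorch; only the extra loss term changes.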

AI-ethics bias-mitigation machine-learning-fairness responsible-AI predictive-modeling
Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 7 / 25
Maturity 16 / 25
Community 4 / 25


Stars: 32
Forks: 1
Language: Python
License: MIT
Last pushed: Jul 23, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/aida-ugent/fairret"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
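
For programmatic use, the same endpoint can be called from Python. This is a minimal sketch assuming only that the endpoint returns JSON; the response fields are not documented on this page, so the payload is simply printed.

import json
from urllib.request import urlopen

# Same endpoint as the curl command above; no API key is needed for up to
# 100 requests per day.
URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/aida-ugent/fairret"

with urlopen(URL) as resp:
    data = json.load(resp)

# The response schema is not documented here, so just pretty-print it.
print(json.dumps(data, indent=2))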