changdaeoh/FarconVAE

Official implementation for KDD'22 paper "Learning Fair Representation via Distributional Contrastive Disentanglement"

Score: 25 / 100 (Experimental)

FarconVAE helps data scientists and machine learning engineers create fairer AI models. It takes your existing dataset, including sensitive attributes like gender or race, and produces a debiased representation of that data. This representation can then be used to train AI models that make more equitable predictions, reducing bias in applications like loan approvals or hiring systems.
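The workflow described above can be illustrated with a toy sketch. Note that `encode_debiased` below is a hypothetical stand-in for a trained FarconVAE encoder (the real model learns a VAE that disentangles the sensitive factor); here we mimic its effect on synthetic data by dropping the feature that correlates with the sensitive attribute, then compare demographic parity gaps before and after:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 200 samples, 5 features. Feature 0 is a proxy for a
# binary sensitive attribute s (e.g. gender); features 1-4 carry the
# task signal.
n = 200
s = rng.integers(0, 2, size=n)            # sensitive attribute
x = rng.normal(size=(n, 5))
x[:, 0] += 3.0 * s                        # leak s into feature 0
y = (x[:, 1] + x[:, 2] > 0).astype(int)   # downstream label, independent of s

def encode_debiased(x):
    """Hypothetical stand-in for a trained FarconVAE encoder: drop the
    sensitive-correlated coordinate to mimic a representation with the
    sensitive factor disentangled away."""
    return x[:, 1:]

def dp_gap(score, s):
    """Demographic parity gap of a thresholded score: absolute
    difference in positive-prediction rates between the two groups."""
    pred = (score > np.median(score)).astype(int)
    return abs(pred[s == 1].mean() - pred[s == 0].mean())

z = encode_debiased(x)
gap_raw = dp_gap(x.sum(axis=1), s)   # raw features leak s via feature 0
gap_fair = dp_gap(z.sum(axis=1), s)  # sensitive coordinate removed
print(f"DP gap raw: {gap_raw:.2f}, debiased: {gap_fair:.2f}")
```

In this contrived setup the debiased representation yields a much smaller parity gap, which is the effect the paper's learned encoder aims for on real data.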

No commits in the last 6 months.

Use this if you need to build machine learning models that are less biased and make fairer decisions, particularly when dealing with sensitive personal data.

Not ideal if your primary goal is interpretability of individual features rather than overall fairness and debiasing, or if you don't have sensitive attribute information.

Tags: algorithmic-fairness, debiasing, machine-learning-ethics, data-privacy, responsible-AI
Flags: No License · Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 8 / 25
Community: 11 / 25


Stars: 23
Forks: 3
Language: Python
License: none
Last pushed: Jun 25, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/changdaeoh/FarconVAE"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.