changdaeoh/FarconVAE
Official implementation for KDD'22 paper "Learning Fair Representation via Distributional Contrastive Disentanglement"
FarconVAE helps data scientists and machine learning engineers create fairer AI models. It takes your existing dataset, including sensitive attributes like gender or race, and produces a debiased representation of that data. This representation can then be used to train AI models that make more equitable predictions, reducing bias in applications like loan approvals or hiring systems.
No commits in the last 6 months.
Use this if you need to build machine learning models that are less biased and make fairer decisions, particularly when dealing with sensitive personal data.
Not ideal if your primary goal is interpretability of individual features rather than overall fairness and debiasing, or if you don't have sensitive attribute information.
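To illustrate the workflow the repository enables, here is a minimal, self-contained sketch of the "debias then train" idea. This is not FarconVAE's actual API (the paper learns the representation with a distributional contrastive VAE); the stand-in debiasing step below simply residualizes out the linear component of the sensitive attribute, which conveys the same two-step pattern.

```python
# Conceptual sketch only -- NOT the repository's API. FarconVAE learns a
# debiased representation with a contrastive VAE; here we approximate the
# idea with a simple linear residualization as a stand-in.
import numpy as np

rng = np.random.default_rng(0)
n = 200
s = rng.integers(0, 2, size=n).astype(float)    # sensitive attribute (e.g. gender)
x = rng.normal(size=(n, 3)) + s[:, None] * 1.5  # raw features that leak the attribute

def residualize(features, sensitive):
    """Remove the linear component of `sensitive` from each feature column."""
    s_centered = sensitive - sensitive.mean()
    coef = features.T @ s_centered / (s_centered @ s_centered)
    return features - np.outer(s_centered, coef)

# Step 1: produce a representation scrubbed of (linear) sensitive information.
z = residualize(x, s)

# Step 2: z, not x, would now be fed to the downstream model.
# Sanity check: each debiased column is uncorrelated with the attribute.
for j in range(z.shape[1]):
    assert abs(np.corrcoef(z[:, j], s)[0, 1]) < 1e-8
```

The design point carried over from the paper is the separation of concerns: bias removal happens once, at the representation level, so any downstream model trained on `z` inherits the fairness properties.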
Stars: 23
Forks: 3
Language: Python
License: —
Category:
Last pushed: Jun 25, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/changdaeoh/FarconVAE"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
AdaptiveMotorControlLab/CEBRA
Learnable latent embeddings for joint behavioral and neural analysis - Official implementation of CEBRA
theolepage/sslsv
Toolkit for training and evaluating Self-Supervised Learning (SSL) frameworks for Speaker...
PaddlePaddle/PASSL
PASSL includes SimCLR, MoCo v1/v2, BYOL, CLIP, PixPro, SimSiam, SwAV, BEiT, MAE and other image self-supervised learning algorithms, as well as Vision...
YGZWQZD/LAMDA-SSL
30 Semi-Supervised Learning Algorithms
ModSSC/ModSSC
ModSSC: A Modular Framework for Semi Supervised Classification