pliang279/LM_bias
[ICML 2021] Towards Understanding and Mitigating Social Biases in Language Models
This project evaluates and mitigates unwanted social biases, such as gender stereotypes, in text generated by large language models like GPT-2. It takes generated text or language-model embeddings as input and outputs scores quantifying the biases present; a minimal sketch of the idea follows the notes below. Its intended users are researchers and ethicists working on fairness in AI and natural language processing.
No commits in the last 6 months.
Use this if you need to measure and mitigate social biases in text generated by AI language models.
Not ideal if you want to address biases in non-textual data, or in the training data itself rather than in the generated output.
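To make the input/output contract above concrete, here is a minimal, self-contained sketch of the embedding-projection idea behind this line of work: score how strongly a supposedly neutral word's embedding aligns with a gender direction. It is illustrative only; the toy vectors are hypothetical, this is not the repo's actual API, and the paper's method (A-INLP) is more involved than a single cosine score.

```python
import numpy as np

# Toy embeddings for illustration only -- in practice these would come
# from a language model such as GPT-2. All vectors here are hypothetical.
emb = {
    "he":       np.array([ 1.0, 0.2, 0.1]),
    "she":      np.array([-1.0, 0.2, 0.1]),
    "nurse":    np.array([-0.6, 0.5, 0.3]),
    "engineer": np.array([ 0.7, 0.4, 0.2]),
    "teacher":  np.array([ 0.1, 0.6, 0.4]),
}

def unit(v):
    # Normalize a vector to unit length.
    return v / np.linalg.norm(v)

# A one-dimensional "gender direction": the normalized difference
# between a definitionally gendered word pair.
gender_dir = unit(emb["he"] - emb["she"])

# Bias score for a (supposedly neutral) word: cosine similarity of its
# embedding with the gender direction. Values near 0 suggest little
# gender association; large magnitudes suggest a stereotyped direction.
for word in ["nurse", "engineer", "teacher"]:
    score = float(unit(emb[word]) @ gender_dir)
    print(f"{word}: {score:+.3f}")
```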
Stars: 61
Forks: 10
Language: Python
License: MIT
Category: ml-frameworks
Last pushed: Nov 02, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/pliang279/LM_bias"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
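If you prefer Python over curl, here is a minimal sketch of calling the same endpoint with the requests library, assuming the endpoint returns JSON and that the free tier needs no auth header (per the note above); the response schema is not documented here, so the example simply prints whatever comes back.

```python
import requests

# Same endpoint as the curl example above.
url = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/pliang279/LM_bias"

resp = requests.get(url, timeout=10)
resp.raise_for_status()  # fail loudly on HTTP errors (e.g., rate limiting)
print(resp.json())       # assumption: the body is JSON
```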
Higher-rated alternatives
fairlearn/fairlearn
A Python package to assess and improve fairness of machine learning models.
Trusted-AI/AIF360
A comprehensive set of fairness metrics for datasets and machine learning models, explanations...
holistic-ai/holisticai
This is an open-source tool to assess and improve the trustworthiness of AI systems.
microsoft/responsible-ai-toolbox
Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment...
datamllab/awesome-fairness-in-ai
A curated list of awesome Fairness in AI resources