dccuchile/wefe
WEFE: The Word Embeddings Fairness Evaluation Framework. WEFE is a framework that standardizes bias measurement and mitigation in word embedding models. Please feel welcome to open an issue if you have any questions, or a pull request if you want to contribute to the project!
Helps developers working with natural language understand and reduce unfair biases in AI models. It takes sets of words, queries them against word embedding models, and outputs fairness metrics that quantify biases related to gender, race, or other attributes. Data scientists, machine learning engineers, and NLP researchers can use it to build more equitable AI systems.
183 stars. Available on PyPI.
Use this if you are developing AI models that process language and need to systematically evaluate and mitigate social biases embedded in your word representations.
Not ideal if you are a non-developer seeking an out-of-the-box solution to audit or debias a deployed AI product without direct access to the underlying model code.
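Among the fairness metrics in this space is WEAT (the Word Embedding Association Test), which compares how strongly two sets of target words associate with two sets of attribute words. As a minimal sketch of the kind of score such a metric produces, here is the WEAT effect size computed with plain NumPy over invented toy vectors (this is an illustration of the underlying math, not WEFE's actual API; all names and vectors below are made up):

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B):
    # s(w, A, B): mean similarity of w to attribute set A minus to set B.
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    # Effect size: difference of mean associations of the two target sets,
    # normalized by the pooled standard deviation over all targets.
    sx = [association(x, A, B) for x in X]
    sy = [association(y, A, B) for y in Y]
    pooled_std = np.std(sx + sy, ddof=1)
    return (np.mean(sx) - np.mean(sy)) / pooled_std

# Toy 2-D embeddings (hypothetical): targets X lean toward attribute set A,
# targets Y toward attribute set B, so the effect size should be positive.
X = [np.array([1.0, 0.1]), np.array([0.9, 0.2])]   # e.g. "career" terms
Y = [np.array([0.1, 1.0]), np.array([0.2, 0.9])]   # e.g. "family" terms
A = [np.array([1.0, 0.0]), np.array([0.95, 0.05])] # e.g. male terms
B = [np.array([0.0, 1.0]), np.array([0.05, 0.95])] # e.g. female terms

d = weat_effect_size(X, Y, A, B)
print(round(d, 3))  # positive: X associates more with A than Y does
```

A positive effect size indicates the first target set sits closer to the first attribute set in the embedding space; values near zero suggest little measurable association bias.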
Stars: 183
Forks: 14
Language: Python
License: MIT
Last pushed: Nov 24, 2025
Commits (30d): 0
Dependencies: 9
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/dccuchile/wefe"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related tools
dreji18/Fairness-in-AI
Detecting Bias and ensuring Fairness in AI solutions
amazon-science/bold
Dataset associated with "BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language...
dhfbk/variationist
Variationist: Exploring Multifaceted Variation and Bias in Written Language Data (ACL 2024 demo track)
soarsmu/BiasFinder
BiasFinder | IEEE TSE | Metamorphic Test Generation to Uncover Bias for Sentiment Analysis Systems
microsoft/SafeNLP
Safety Score for Pre-Trained Language Models