dccuchile/wefe

WEFE: The Word Embeddings Fairness Evaluation Framework. WEFE is a framework that standardizes bias measurement and mitigation in word embedding models. Please feel free to open an issue if you have any questions, or a pull request if you want to contribute to the project!

Score: 53 / 100 (Established)

Helps developers working with natural language understand and reduce unfair biases in AI models. It takes text data, processes it through word embedding models, and then outputs various metrics that highlight biases related to gender, race, or other attributes. Data scientists, machine learning engineers, and NLP researchers can use this to build more equitable AI systems.

183 stars. Available on PyPI.

Use this if you are developing AI models that process language and need to systematically evaluate and mitigate social biases embedded in your word representations.

Not ideal if you are a non-developer seeking an out-of-the-box solution to audit or debias a deployed AI product without direct access to the underlying model code.

natural-language-processing machine-learning-fairness computational-linguistics ai-ethics data-science
Maintenance 6 / 25
Adoption 10 / 25
Maturity 25 / 25
Community 12 / 25


Stars: 183
Forks: 14
Language: Python
License: MIT
Last pushed: Nov 24, 2025
Commits (30d): 0
Dependencies: 9

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/dccuchile/wefe"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.