zjunlp/BiasEdit

[TrustNLP@NAACL 2025] BiasEdit: Debiasing Stereotyped Language Models via Model Editing

Score: 28 / 100 (Experimental)

Language models sometimes generate biased or stereotypical text. This project helps researchers and developers remove harmful stereotypes, such as gender or race bias, from large language models without compromising their overall language abilities. You input a pre-trained language model and a dataset designed to identify bias, and it outputs a refined, less-biased language model ready for use in applications.

No commits in the last 6 months.

Use this if you need to reduce or eliminate specific biases from your language models to ensure fair and ethical AI outputs.

Not ideal if you are looking for a general-purpose language model fine-tuning tool rather than a specialized bias mitigation solution.

ethical-ai natural-language-processing bias-mitigation language-model-refinement ai-fairness
No License · Stale (6m) · No Package · No Dependents

Maintenance: 2 / 25
Adoption: 6 / 25
Maturity: 8 / 25
Community: 12 / 25


Stars: 18
Forks: 3
Language: Python
License: none
Last pushed: Sep 30, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/zjunlp/BiasEdit"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
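For programmatic access, the same endpoint can be called from Python instead of curl. A minimal sketch, assuming only the URL pattern shown above (`/api/v1/quality/<ecosystem>/<owner>/<repo>`); the response schema is not documented here, so the fetch itself is left commented:

```python
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a repository.

    The path segments are assumed from the curl example above;
    values are percent-encoded defensively.
    """
    return f"{BASE}/{quote(ecosystem)}/{quote(owner)}/{quote(repo)}"

url = quality_url("nlp", "zjunlp", "BiasEdit")
print(url)

# To actually fetch (requires network; anonymous tier is 100 requests/day):
# import json, urllib.request
# data = json.load(urllib.request.urlopen(url))
```

Sending an API key for the higher 1,000/day tier would presumably go in a request header, but the key parameter name is not shown on this page, so it is omitted here.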