neha13rana/Stereotypical-Bias-Analyzer
In this project, we analyzed stereotypical biases across ten domains using four existing datasets and a dataset we created ourselves. Masked language models such as BERT and RoBERTa exhibit significant biases, underscoring the importance of bias mitigation in natural language processing. Users can input a sentence to identify the types of bias it contains.
No commits in the last 6 months.
Stars
2
Forks
1
Language
Jupyter Notebook
License
MIT
Category
Last pushed
Aug 28, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/neha13rana/Stereotypical-Bias-Analyzer"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
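The curl call above can also be made from Python. Below is a minimal sketch using only the standard library; the endpoint path mirrors the curl example, but the shape of the returned JSON is an assumption, since the response schema is not documented here.

```python
# Minimal sketch of querying the pt-edge quality API.
# The base URL comes from the curl example above; the JSON
# response structure is assumed, not documented.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/nlp"


def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository endpoint URL."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (requires network access)."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(quality_url("neha13rana", "Stereotypical-Bias-Analyzer"))
```

Without an API key this stays within the 100-requests/day anonymous limit, so the call needs no authentication headers.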
Higher-rated alternatives
yueqingliang1/UNBench
Data and code for the paper "Benchmarking LLMs for Political Science: A United Nations Perspective".
zjunlp/BiasEdit
[TrustNLP@NAACL 2025] BiasEdit: Debiasing Stereotyped Language Models via Model Editing
MiuLab/FactAlign
Source code of our EMNLP 2024 paper "FactAlign: Long-form Factuality Alignment of Large Language Models"
josmarios/textbias-edu
Code for the article "From hype to evidence: exploring large language models for inter-group...