KID-22/LLM-IR-Bias-Fairness-Survey
This is the repo for the survey of Bias and Fairness in IR with LLMs.
This resource collects and organizes academic papers on bias and unfairness that arise when Large Language Models (LLMs) are used in Information Retrieval (IR) systems, categorizing each paper by the type of bias involved (e.g., in data collection, model development, or result evaluation) and the form of unfairness, to help researchers and practitioners track emerging challenges. The target users are researchers, PhD students, and data scientists working on fairness in AI, particularly within search and recommendation systems.
No commits in the last 6 months.
Use this if you research or develop AI systems that use LLMs for information retrieval and need to understand, identify, and address bias and unfairness.
Not ideal if you want practical, out-of-the-box bias mitigation for existing commercial LLM applications rather than a research-oriented survey.
Stars: 59
Forks: 3
Language: —
License: MIT
Category:
Last pushed: Sep 04, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/KID-22/LLM-IR-Bias-Fairness-Survey"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
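For callers who prefer a script over curl, here is a minimal Python sketch of the same request. It assumes the endpoint returns JSON and that an optional key, if you have one, is sent in a request header; the `X-API-Key` header name is an assumption, not something this page confirms.

```python
import json
import urllib.request

# Same endpoint as the curl example above.
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "llm-tools/KID-22/LLM-IR-Bias-Fairness-Survey")

def fetch_quality(api_key=None):
    """Fetch this repo's quality record; works without a key (100 requests/day)."""
    req = urllib.request.Request(URL)
    if api_key:
        # Assumed header name; check the API docs for the actual auth scheme.
        req.add_header("X-API-Key", api_key)
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)  # assumes a JSON response body

if __name__ == "__main__":
    print(json.dumps(fetch_quality(), indent=2))
```

Without a key, this stays within the free 100 requests/day tier described above.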
Higher-rated alternatives
cvs-health/langfair: LangFair is a Python library for conducting use-case level LLM bias and fairness assessments.
BetterForAll/HonestyMeter: An NLP-powered framework for evaluating objectivity and bias in media content,...
bws82/biasclear: Structural bias detection and correction engine built on Persistent Influence Theory (PIT).
Hanpx20/SafeSwitch: Official code repository for the paper "Internal Activation as the Polar Star for Steering...
faiyazabdullah/TranslationTangles: Uncovering Performance Gaps and Bias Patterns in LLM-Based Translations Across Language Families...