KID-22/LLM-IR-Bias-Fairness-Survey

This is the repository for a survey of Bias and Fairness in Information Retrieval (IR) with Large Language Models (LLMs).

Quality score: 33 / 100 (Emerging)

This resource collects and organizes academic papers on bias and unfairness that arise when Large Language Models (LLMs) are used in Information Retrieval (IR) systems. Papers are categorized by specific types of bias (e.g., in data collection, model development, or result evaluation) and unfairness, helping researchers and practitioners understand emerging challenges. The target users are researchers, PhD students, and data scientists working on fairness in AI, particularly within search and recommendation systems.

No commits in the last 6 months.

Use this if you are researching or developing AI systems that use LLMs for information retrieval and need to understand, identify, and address issues of bias and unfairness.

Not ideal if you are a casual user looking for practical, out-of-the-box solutions for immediate bias mitigation in existing commercial LLM applications without a research focus.

Tags: AI Fairness, Information Retrieval, Large Language Models, Algorithmic Bias, Responsible AI
Badges: Stale (6 months), No Package, No Dependents
Maintenance: 2 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 7 / 25

How are scores calculated? The four 25-point subscores above sum to the overall score (2 + 8 + 16 + 7 = 33).

Stars: 59
Forks: 3
Language: —
License: MIT
Last pushed: Sep 04, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/KID-22/LLM-IR-Bias-Fairness-Survey"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.