minnesotanlp/cobbler
Code and data for Koo et al.'s ACL 2024 paper "Benchmarking Cognitive Biases in Large Language Models as Evaluators"
This project helps AI researchers and developers systematically test how large language models (LLMs) behave as evaluators. Given your LLM and a set of predefined cognitive bias tests, it produces a benchmark showing how susceptible the model's judgments are to common human cognitive biases (a sketch of one such test appears after the usage notes below). The primary audience is teams developing or deploying LLMs, especially as automated evaluators.
No commits in the last 6 months.
Use this if you are developing or fine-tuning LLMs and need to understand how they might inadvertently introduce cognitive biases into their evaluations.
Not ideal if you want to detect cognitive biases in human evaluators, or in LLM-generated text itself, rather than in the LLM's own evaluative behavior.
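To make the idea concrete, here is a minimal sketch of one kind of probe such a benchmark runs: an order-bias check that asks the model to judge the same pair of responses in both presentation orders. This is illustrative only, not the repository's actual code; judge is a hypothetical callable standing in for your model's pairwise-evaluation prompt.

# Minimal order-bias probe (illustrative; not the repository's code).
# `judge` is a hypothetical callable: judge(question, first, second) -> "A" or "B",
# where "A" means the first-shown response was preferred.
def order_bias_rate(judge, pairs):
    """Fraction of pairs whose verdict depends on presentation order."""
    positional = 0
    for question, resp_a, resp_b in pairs:
        first = judge(question, resp_a, resp_b)
        second = judge(question, resp_b, resp_a)  # same pair, order swapped
        # A consistent judge flips its label when the order flips
        # ("A" then "B", or "B" then "A"); matching labels mean the
        # verdict tracked position, not content.
        if first == second:
            positional += 1
    return positional / len(pairs)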
Stars: 22
Forks: 2
Language: Jupyter Notebook
License: —
Category: LLM tools
Last pushed: Feb 16, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/minnesotanlp/cobbler"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
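If you prefer Python to curl, the same endpoint can be fetched with the standard library alone. This sketch assumes only the URL shown above and simply pretty-prints whatever JSON the API returns:

# Fetch the repo's quality data from the endpoint above (no API key needed
# within the free 100-requests/day tier) and pretty-print the JSON response.
import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/minnesotanlp/cobbler"

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))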
Higher-rated alternatives
cvs-health/langfair
LangFair is a Python library for conducting use-case level LLM bias and fairness assessments
BetterForAll/HonestyMeter
HonestyMeter: An NLP-powered framework for evaluating objectivity and bias in media content,...
bws82/biasclear
Structural bias detection and correction engine built on Persistent Influence Theory (PIT)
KID-22/LLM-IR-Bias-Fairness-Survey
This is the repo for the survey of Bias and Fairness in IR with LLMs.
Hanpx20/SafeSwitch
Official code repository for the paper "Internal Activation as the Polar Star for Steering...