bowen-upenn/llm_token_bias

[EMNLP 2024] A Peek into Token Bias: Large Language Models Are Not Yet Genuine Reasoners

Score: 30/100 (Emerging)

This project provides a testing framework for probing whether large language models (LLMs) genuinely reason or merely exploit superficial patterns in their input. It takes reasoning problems and systematically perturbs logically irrelevant tokens to see whether the model's answers change; if they do, the model is exhibiting "token bias" rather than genuine understanding. Researchers, evaluators, and practitioners working with LLMs can use it to assess model robustness beyond raw accuracy scores.
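A minimal sketch of that perturbation idea, in Python. The prompt template, the toy model, and the function names below are illustrative assumptions, not the repo's actual interface; the real framework drives a live LLM API and uses its own prompt sets.

def is_token_biased(ask, template, original, perturbed):
    """True if swapping a logically irrelevant token changes the answer.

    ask: a callable taking a prompt string and returning the model's answer.
    """
    answer_a = ask(template.format(name=original)).strip().lower()
    answer_b = ask(template.format(name=perturbed)).strip().lower()
    return answer_a != answer_b

# Conjunction-fallacy style problem: the name should not affect the logic,
# so a genuine reasoner answers identically for "Linda" and "Bob".
TEMPLATE = (
    "{name} is 31, outspoken, and majored in philosophy. "
    "Which is more probable? (a) {name} is a bank teller. "
    "(b) {name} is a bank teller and active in the feminist movement. "
    "Answer with (a) or (b) only."
)

def toy_model(prompt):
    """Stand-in for an LLM call: a deliberately token-biased model that
    keys on the name instead of the logic."""
    return "(b)" if "Linda" in prompt else "(a)"

if __name__ == "__main__":
    biased = is_token_biased(toy_model, TEMPLATE, "Linda", "Bob")
    print("Token bias detected." if biased else "Answer stable under perturbation.")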

No commits in the last 6 months.

Use this if you need to rigorously evaluate whether an LLM's reasoning is genuine or whether the model is merely keying on specific words and phrases.

Not ideal if you are looking for methods to improve an LLM's reasoning abilities directly or to benchmark its performance on standard, unaltered tasks.

LLM evaluation, AI ethics, cognitive science, model robustness, reasoning assessment
Stale (6m), No Package, No Dependents
Maintenance: 0/25
Adoption: 7/25
Maturity: 16/25
Community: 7/25


Stars: 26
Forks: 2
Language: Python
License: MIT
Last pushed: Dec 11, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/bowen-upenn/llm_token_bias"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
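The same data can be fetched from Python. This is a minimal sketch assuming only the endpoint shown above; the response schema is not documented on this page, so it is simply pretty-printed.

import json
import urllib.request

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/bowen-upenn/llm_token_bias")

# Fetch the quality report and print the raw JSON response.
with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))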