aigc-apps/PertEval

[NeurIPS '24 Spotlight] PertEval: Unveiling Real Knowledge Capacity of LLMs via Knowledge-invariant Perturbations

Score: 31 / 100 (Emerging)

This toolkit helps AI researchers and developers understand what an LLM truly "knows" by testing whether its answers hold up under perturbations that leave the required knowledge unchanged. You feed it existing multiple-choice benchmark questions, and it generates slightly altered versions that should not change the underlying knowledge needed to answer. The output is a robust score of the LLM's real knowledge capacity, along with insights into why it fails when it does.

No commits in the last 6 months.

Use this if you need to rigorously evaluate the fundamental knowledge of large language models on closed-ended benchmarks, going beyond surface-level accuracy to separate genuine understanding from spurious correlations.

Not ideal if you are looking to evaluate LLMs on open-ended tasks, creative generation, or conversational ability, as this tool focuses specifically on probing knowledge via multiple-choice questions.
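As a rough illustration of the idea (not PertEval's actual interface), a knowledge-invariant perturbation can be as simple as reordering the answer options of a multiple-choice item while tracking where the correct answer moves; a model that truly knows the answer should be unaffected. The item dictionary and the shuffle_options helper below are hypothetical, included only to make the concept concrete.

import random

def shuffle_options(question, seed=0):
    """Return a copy of a multiple-choice item with its options permuted.

    The knowledge needed to answer is unchanged; only the surface form
    (option order, and therefore the letter of the correct answer) differs.
    """
    rng = random.Random(seed)
    options = list(question["options"])
    correct_text = options[question["answer_index"]]
    rng.shuffle(options)
    return {
        "stem": question["stem"],
        "options": options,
        "answer_index": options.index(correct_text),
    }

# Hypothetical benchmark item.
item = {
    "stem": "Which planet is closest to the Sun?",
    "options": ["Venus", "Mercury", "Earth", "Mars"],
    "answer_index": 1,
}
perturbed = shuffle_options(item, seed=42)
print(perturbed)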

LLM evaluation · AI model testing · natural language processing · research · benchmark analysis · model robustness
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 10 / 25


Stars: 14
Forks: 2
Language: Jupyter Notebook
License: Apache-2.0
Last pushed: Oct 30, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/aigc-apps/PertEval"

The API is open to everyone at 100 requests/day with no key required; a free key raises the limit to 1,000 requests/day.
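For programmatic use, a minimal Python sketch of the same request is shown below. It assumes only that the endpoint returns JSON with the quality data shown on this page; the exact response schema is not documented here.

import json
import requests  # third-party: pip install requests

# Endpoint taken from the curl example above.
URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/aigc-apps/PertEval"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()

# Assumption: the response body is JSON; print it for inspection.
print(json.dumps(resp.json(), indent=2))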