psunlpgroup/ReaLMistake

This repository includes a benchmark and code for the paper "Evaluating LLMs at Detecting Errors in LLM Responses".

Overall score: 32 / 100 (Emerging)

This project offers a benchmark of errors made by large language models (GPT-4 and Llama 2 70B) on three tasks: math word problem generation, fine-grained fact verification, and answerability classification. Each instance pairs the input given to the LLM and its response with expert human judgments on whether the response contains an error, which error categories apply, and a natural-language explanation. It is aimed at researchers and developers building better error-detection systems to improve the reliability of LLMs.
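
As a rough illustration of how such benchmark instances might be consumed, here is a minimal Python sketch. The file name and the field names (input, response, error, categories, explanation) are assumptions made for illustration only, not the repository's actual schema; consult the repository for the real data format.

import json

# Hypothetical example: read benchmark instances from a local JSON Lines file.
# The file name and the field names below are illustrative assumptions, not
# the actual ReaLMistake schema.
with open("realmistake_examples.jsonl", "r", encoding="utf-8") as f:
    instances = [json.loads(line) for line in f]

for instance in instances[:3]:
    print("Input to the LLM: ", instance["input"][:80])
    print("LLM response:     ", instance["response"][:80])
    print("Error present?    ", instance["error"])        # expert binary judgment
    print("Error categories: ", instance["categories"])   # e.g. reasoning, instruction-following
    print("Explanation:      ", instance["explanation"][:80])
    print("-" * 60)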

No commits in the last 6 months.

Use this if you are developing or evaluating automated systems to identify and categorize errors in large language model outputs.

Not ideal if you are a general user looking for a tool to fix or debug your own LLM applications directly, as this is a benchmark dataset for research.

Tags: LLM evaluation, AI quality assurance, natural language processing, research, error analysis, AI model auditing
Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 9 / 25

Stars: 31
Forks: 3
Language: Python
License:
Last pushed: Aug 18, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/psunlpgroup/ReaLMistake"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
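
For scripted access, the same endpoint can be queried from Python. This is a minimal sketch assuming the endpoint returns JSON; the structure of the response is not documented here, so it is printed for inspection rather than parsed into specific fields.

import requests

# Query the quality API for this repository. The URL matches the curl example
# above; the shape of the JSON response is an assumption, so inspect it first.
url = "https://pt-edge.onrender.com/api/v1/quality/nlp/psunlpgroup/ReaLMistake"
response = requests.get(url, timeout=30)
response.raise_for_status()
data = response.json()
print(data)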