psunlpgroup/ReaLMistake
This repository includes a benchmark and code for the paper "Evaluating LLMs at Detecting Errors in LLM Responses".
This project offers a dataset of errors made by large language models such as GPT-4 and Llama 2 70B across tasks like math problem solving, fact checking, and question answering. Each instance pairs an LLM's input and response with expert human judgments: whether an error exists, its category, and a natural-language explanation (see the illustrative instance below). It is aimed at researchers and developers building better error-detection systems to improve the reliability of LLMs.
No commits in the last 6 months.
Use this if you are developing or evaluating automated systems to identify and categorize errors in large language model outputs.
Not ideal if you are a general user looking for a tool to fix or debug your own LLM applications directly, as this is a benchmark dataset for research.
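As an illustration only, here is one way a single benchmark instance could be represented in Python, given the fields named above (input, response, error judgment, category, explanation). The key names are hypothetical; the repository's actual file layout and labels may differ.

import json

# Hypothetical instance mirroring the fields described above; the actual
# ReaLMistake file layout and key names may differ.
instance = {
    "input": "A train travels 60 km in 45 minutes. What is its speed in km/h?",
    "response": "The speed is 60 / 45 = 1.33 km/h.",  # LLM output under judgment
    "error": True,                 # expert judgment: does an error exist?
    "error_category": "reasoning", # hypothetical category label
    "explanation": "The response divides by minutes instead of hours; "
                   "60 km in 0.75 h is 80 km/h.",
}

print(json.dumps(instance, indent=2))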
Stars
31
Forks
3
Language
Python
License
—
Category
—
Last pushed
Aug 18, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/psunlpgroup/ReaLMistake"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
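For programmatic access, a minimal Python sketch of the same call, assuming the endpoint returns JSON (the response schema is not documented here):

import requests

# Same endpoint as the curl command above. Anonymous access is
# rate-limited to 100 requests/day; a free key raises this to 1,000/day
# (how the key is passed is not documented here).
URL = "https://pt-edge.onrender.com/api/v1/quality/nlp/psunlpgroup/ReaLMistake"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()
print(resp.json())  # assumed to be JSON; inspect the payload to learn its fields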
Higher-rated alternatives
google/langfun
OO for LLMs
tanaos/artifex
Small Language Model Inference, Fine-Tuning and Observability. No GPU, no labeled data needed.
preligens-lab/textnoisr
Adding random noise to a text dataset, and controlling very accurately the quality of the result
vulnerability-lookup/VulnTrain
A tool to generate datasets and models based on vulnerabilities descriptions from @Vulnerability-Lookup.
masakhane-io/masakhane-mt
Machine Translation for Africa