GAIR-NLP/benbench

Benchmarking Benchmark Leakage in Large Language Models

Quality score: 23 / 100 (Experimental)

This project helps evaluate whether a large language model (LLM) has been trained on specific benchmark datasets, which can inflate its apparent performance. Given an LLM and a benchmark dataset, it outputs an analysis of potential data leakage, including a 'Benchmark Transparency Card' summarizing what is known about the model's training data. It is aimed at researchers, academics, and anyone comparing the true capabilities of different LLMs.
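This card does not describe benbench's exact method, but leakage checks of this kind typically compare how confidently a model scores verbatim benchmark items against meaning-preserving paraphrases. The Python sketch below illustrates that general idea only; the model name and example strings are placeholders, not part of benbench.

# Illustrative sketch of a perplexity-based leakage check, NOT benbench's
# actual pipeline. Assumes `torch` and `transformers` are installed; the
# model and texts below are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def mean_nll(model, tokenizer, text: str) -> float:
    """Average per-token negative log-likelihood of `text` under `model`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)  # loss is mean cross-entropy over tokens
    return out.loss.item()

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

original = "Natalia sold 48 clips in April and half as many in May."
paraphrase = "In April Natalia sold 48 clips, then half that number in May."

# A model that memorized a benchmark item is typically far more confident
# on the verbatim text than on a paraphrase with the same meaning.
gap = mean_nll(model, tokenizer, paraphrase) - mean_nll(model, tokenizer, original)
print(f"NLL gap (paraphrase - original): {gap:.3f}")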

No commits in the last 6 months.

Use this if you need to assess whether an LLM's high score on a mathematical reasoning benchmark reflects genuine capability or training on the benchmark's data.

Not ideal if you are looking for a tool to develop new LLM architectures or improve existing model performance on specific tasks.

LLM evaluation · AI ethics · model transparency · benchmark analysis · responsible AI
No License · Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 8 / 25
Community: 7 / 25
The four subscores sum to the overall score of 23 / 100.

Stars: 60
Forks: 3
Language: JavaScript
License: None
Last pushed: May 20, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/GAIR-NLP/benbench"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
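A minimal sketch of the same request from Python, assuming the endpoint returns JSON; the response fields are not documented on this card, so the example simply prints the payload.

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/GAIR-NLP/benbench"
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # surfaces HTTP errors (e.g. rate limiting)
print(resp.json())       # inspect the payload; field names are undocumented here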