terryyz/llm-benchmark

A list of LLM benchmark frameworks.

Overall score: 35 / 100 (Emerging)

This is a curated list of tools for evaluating Large Language Models (LLMs). It helps AI researchers, machine learning engineers, and data scientists compare evaluation frameworks, understand their datasets, and select the benchmark best suited to a specific LLM project.

No commits in the last 6 months.

Use this if you are a researcher or engineer who needs to systematically compare the performance of different Large Language Models across various tasks and datasets.

Not ideal if you are looking for a tool to develop or fine-tune LLMs, as this focuses solely on evaluating existing models.

Tags: LLM evaluation, AI research, natural language processing, machine learning engineering, model benchmarking
Stale (6m), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 9 / 25
Maturity: 16 / 25
Community: 10 / 25


Stars: 73

Forks: 6

Language:

License: Apache-2.0

Last pushed: Feb 17, 2024

Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/terryyz/llm-benchmark"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
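
For scripted use, the same endpoint can be called from Python. The sketch below is a minimal, assumption-light example: it fetches the URL shown in the curl command above and pretty-prints whatever JSON comes back, since the exact response schema is not documented on this page.

# Minimal sketch: fetch the quality data for terryyz/llm-benchmark and pretty-print it.
# No specific response fields are assumed; the JSON is printed as-is.
import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/terryyz/llm-benchmark"

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))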