JinjieNi/MixEval
The official evaluation suite and dynamic data release for MixEval.
MixEval is a fast, low-cost benchmark for evaluating large language models (LLMs). It grades model responses to a dynamically refreshed set of prompts and produces a single performance score that the authors report correlates strongly (a 0.96 ranking correlation) with Chatbot Arena. AI researchers and developers who need to benchmark LLMs against established human preference ratings will find it particularly useful.
255 stars. No commits in the last 6 months.
Use this if you need a quick, inexpensive LLM evaluation whose rankings closely track human preference leaderboards, without the cost and turnaround time of methods like Chatbot Arena.
Not ideal if you need to evaluate anything beyond text-to-text generative models; for multimodal, any-to-any evaluation, the companion benchmark MixEval-X is more suitable.
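For orientation, a typical evaluation run follows the pattern below. This is a sketch only: the python -m mix_eval.evaluate entry point and flags mirror patterns documented in the repo, but the model identifier and flag values here are placeholders, so consult the README for the exact options.

# Illustrative sketch, not a verified command: flag names follow the repo's
# documented patterns but may differ between versions, and
# "llama_3_8b_instruct" is a placeholder model identifier.
python -m mix_eval.evaluate \
    --model_name llama_3_8b_instruct \
    --benchmark mixeval_hard \
    --version 2024-06-01 \
    --batch_size 20 \
    --output_dir mix_eval/data/model_responses/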
Stars: 255
Forks: 41
Language: Python
License: —
Category: —
Last pushed: Nov 10, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/JinjieNi/MixEval"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
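The endpoint returns JSON, but its schema is not documented on this page, so any field name is a guess based on the stats listed above. A safe first step is to pretty-print the raw response, as in the sketch below.

# Pretty-print the JSON response with jq. To pull a single value, append a
# key path such as '.stars' (a guess based on the stats above, not a
# documented field).
curl -s "https://pt-edge.onrender.com/api/v1/quality/transformers/JinjieNi/MixEval" | jq '.'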
Higher-rated alternatives
eth-sri/matharena
Evaluation of LLMs on latest math competitions
tatsu-lab/alpaca_eval
An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast.
HPAI-BSC/TuRTLe
TuRTLe: A Unified Evaluation of LLMs for RTL Generation 🐢 (MLCAD 2025)
nlp-uoregon/mlmm-evaluation
Multilingual Large Language Models Evaluation Benchmark
haesleinhuepf/human-eval-bia
Benchmarking Large Language Models for Bio-Image Analysis Code Generation