JinjieNi/MixEval

The official evaluation suite and dynamic data release for MixEval.

Score: 37 / 100 (Emerging)

This project offers a cost-effective and fast solution for evaluating large language models (LLMs) with high accuracy. It takes model responses to a set of dynamic prompts as input and provides a reliable performance score. AI researchers and developers who need to benchmark LLMs against established human preference ratings will find this particularly useful.

255 stars. No commits in the last 6 months.

Use this if you need to quickly and affordably evaluate your LLM's performance with a benchmark that strongly correlates with human preference rankings, without the high cost and time of methods like Chatbot Arena.

Not ideal if you need to evaluate models beyond text-to-text generation; for an 'any-to-any' benchmark spanning other modalities, MixEval-X may be a better fit.
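
For a text-to-text model, an evaluation run follows the repo's two-step flow: generate responses on the dynamic benchmark, then have an LLM parser grade them and compute the score. A minimal sketch, assuming the CLI flags documented in the repo README (flag names, the example model, and the benchmark version are assumptions and may have changed; verify against the README before use):

# Step 1: generate model responses on the dynamic benchmark.
# Model name and benchmark version are illustrative placeholders.
python -m mix_eval.evaluate \
    --model_name gemma_11_7b_instruct \
    --benchmark mixeval_hard \
    --version 2024-06-01 \
    --batch_size 20 \
    --output_dir mix_eval/data/model_responses/ \
    --api_parallel_num 20

# Step 2: grade the responses and compute the score. Grading uses an
# LLM parser, so the relevant API key (e.g. OPENAI_API_KEY) is expected
# in the environment.
python -m mix_eval.compute_metrics \
    --benchmark mixeval_hard \
    --version 2024-06-01 \
    --model_response_dir mix_eval/data/model_responses/ \
    --api_parallel_num 20 \
    --models_to_eval gemma_11_7b_instruct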

Tags: LLM-evaluation · AI-benchmarking · model-performance · natural-language-processing · machine-learning-research
No License · Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 8 / 25
Community 19 / 25


Stars: 255
Forks: 41
Language: Python
License: None
Last pushed: Nov 10, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/JinjieNi/MixEval"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
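
A minimal sketch for inspecting the payload from the shell. The response schema is not documented here, so the `.score` field name below is hypothetical; pretty-print the raw JSON first to see the actual fields:

# Pretty-print the raw JSON payload.
curl -s "https://pt-edge.onrender.com/api/v1/quality/transformers/JinjieNi/MixEval" | python -m json.tool

# Pull out a single field with jq (field name is hypothetical).
curl -s "https://pt-edge.onrender.com/api/v1/quality/transformers/JinjieNi/MixEval" | jq '.score'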