lavantien/llm-tournament
Simple and blazingly fast dynamic evaluation platform for benchmarking Large Language Models
This platform helps you compare how different Large Language Models (LLMs) perform on your specific tasks. You input your prompts and the LLMs you want to test, and it provides a clear breakdown of how each model scores, either through manual review or automated evaluation by other AI judges. It's designed for anyone who needs to pick the best LLM for their application, whether you're building a chatbot, content generator, or analysis tool.
Use this if you need to systematically benchmark multiple LLMs to understand their strengths and weaknesses for your particular use cases, with options for both human and AI-driven scoring.
Not ideal if you're looking for a simple, single-shot evaluation or if you only need to compare a couple of LLMs without detailed metrics or ongoing tracking.
Stars: 8
Forks: 2
Language: Go
License: MIT
Last pushed: Jan 31, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/lavantien/llm-tournament"
The endpoint is open to everyone at 100 requests/day with no key needed; a free key raises the limit to 1,000 requests/day.
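If you want to consume this endpoint from code rather than the shell, here is a minimal Go sketch of the same GET request. The response schema is not documented on this page, so the example just prints the HTTP status and the raw JSON body; how to attach a free API key (header vs. query parameter) is also undocumented here, so it is left out.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Same endpoint as the curl example above.
	url := "https://pt-edge.onrender.com/api/v1/quality/llm-tools/lavantien/llm-tournament"

	// No key needed for the free tier (100 requests/day). If you have a free
	// key, attach it per the provider's docs; the mechanism isn't documented
	// in this listing, so it is omitted here.
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}

	// Print the status line and the raw JSON payload.
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}
```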
Higher-rated alternatives
sierra-research/tau2-bench
τ²-Bench: Evaluating Conversational Agents in a Dual-Control Environment
xlang-ai/OSWorld
[NeurIPS 2024] OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments
bigcode-project/bigcodebench
[ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI
THUDM/AgentBench
A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)
scicode-bench/SciCode
A benchmark that challenges language models to code solutions for scientific problems