llm-ring/lmring
Open-source, self-hostable LLM arena with model compare, voting, and leaderboards
This tool helps you evaluate and compare AI models (such as ChatGPT or Google Gemini) on text, image, and video generation tasks. You submit the same prompt to multiple models, view their responses side by side, and vote on which performs best. It is ideal for researchers, developers, and businesses that need to select the most suitable model for a specific application.
Use this if you need a systematic way to test, compare, and rank the performance of various AI models for generating text, images, or video.
Not ideal if you only plan to use a single AI model, or if your primary need is model training or fine-tuning rather than comparative evaluation.
Stars
8
Forks
3
Language
TypeScript
License
Apache-2.0
Category
Last pushed
Mar 11, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/llm-ring/lmring"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
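If you prefer calling the endpoint from code rather than curl, a minimal TypeScript sketch is below. The base URL comes from the curl example above; the JSON response shape and the `X-API-Key` header name are assumptions, not documented here, so adjust them to match the actual API.

```typescript
// Sketch of a client for the quality API shown above.
// Assumptions: the response is JSON, and an optional key is sent
// via an "X-API-Key" header (header name not confirmed by the docs).
const BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools";

interface QualityRequest {
  url: string;
  headers: Record<string, string>;
}

// Build the request for a given "owner/repo" slug.
function buildRequest(repo: string, apiKey?: string): QualityRequest {
  const headers: Record<string, string> = { Accept: "application/json" };
  if (apiKey) headers["X-API-Key"] = apiKey; // assumed header name
  return { url: `${BASE}/${repo}`, headers };
}

// Fetch and parse the quality data (Node 18+ has global fetch).
async function getQuality(repo: string, apiKey?: string): Promise<unknown> {
  const { url, headers } = buildRequest(repo, apiKey);
  const res = await fetch(url, { headers });
  if (!res.ok) throw new Error(`API error: HTTP ${res.status}`);
  return res.json();
}
```

Usage: `await getQuality("llm-ring/lmring")` for anonymous access (100 requests/day), or pass your free key as the second argument for the higher limit.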
Higher-rated alternatives
betagouv/ComparIA
Open source LLM arena created by the French Government
Skytliang/Multi-Agents-Debate
MAD: The first work to explore Multi-Agent Debate with Large Language Models :D
liuxiaotong/ai-dataset-radar
Multi-source async competitive intelligence engine for AI training data ecosystems with...
Arnoldlarry15/ARES-Dashboard
AI Red Team Operations Console
YerbaPage/SWE-Debate
SWE-Debate: Competitive Multi-Agent Debate for Software Issue Resolution