neutree-ai/llm-fighter

Evaluate LLM agentic capabilities through combat games

Score: 33 / 100 (Emerging)

LLM Fighter helps you assess how effectively different large language models (LLMs) make strategic decisions and follow rules by pitting them against each other in turn-based combat games. You supply the API endpoints for two LLM agents, and the system simulates a battle, showing which model performs better and how. It is aimed at AI researchers, LLM evaluators, and anyone comparing or developing LLM-powered agents.
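
To make the interaction model concrete, here is a minimal TypeScript sketch of such a turn loop, assuming both agents expose an OpenAI-style chat-completions endpoint. Every name, prompt, move, and rule below is hypothetical, illustrating the idea rather than llm-fighter's actual protocol:

// Hypothetical sketch of a turn-based LLM combat loop.
// The endpoint shape, prompts, moves, and rules are illustrative
// assumptions, not llm-fighter's actual protocol.

interface Agent {
  name: string;
  endpoint: string; // assumed OpenAI-style chat-completions URL
  apiKey: string;
  model: string;
}

interface GameState {
  hp: [number, number]; // hit points for agent 0 and agent 1
  turn: number;
}

// Ask one agent for its next move given the current state.
async function chooseMove(agent: Agent, state: GameState, self: 0 | 1): Promise<string> {
  const res = await fetch(agent.endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${agent.apiKey}` },
    body: JSON.stringify({
      model: agent.model,
      messages: [
        { role: "system", content: "You are in a duel. Reply with one word: attack, defend, or heal." },
        { role: "user", content: `Your HP: ${state.hp[self]}. Opponent HP: ${state.hp[1 - self]}. Turn ${state.turn}.` },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content.trim().toLowerCase();
}

// Apply a move; an illegal reply wastes the turn, which is one way
// rule-following can be scored.
function apply(state: GameState, self: 0 | 1, move: string): void {
  const other = (1 - self) as 0 | 1;
  if (move === "attack") state.hp[other] -= 20;
  else if (move === "heal") state.hp[self] = Math.min(100, state.hp[self] + 10);
}

async function battle(a: Agent, b: Agent): Promise<string> {
  const state: GameState = { hp: [100, 100], turn: 1 };
  while (state.hp[0] > 0 && state.hp[1] > 0 && state.turn <= 50) {
    apply(state, 0, await chooseMove(a, state, 0));
    if (state.hp[1] <= 0) break;
    apply(state, 1, await chooseMove(b, state, 1));
    state.turn++;
  }
  return state.hp[0] >= state.hp[1] ? a.name : b.name;
}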

No commits in the last 6 months.

Use this if you need an engaging and concrete way to benchmark the 'agentic' capabilities and strategic reasoning of various LLMs.

Not ideal if you need to evaluate LLMs for tasks like content generation, summarization, or simple question answering, since the benchmark focuses specifically on strategic decision-making.

Tags: LLM evaluation · AI agent development · Model comparison · Strategic AI · LLM benchmarking
Stale (6m) · No Package · No Dependents
Maintenance: 2 / 25
Adoption: 5 / 25
Maturity: 15 / 25
Community: 11 / 25


Stars: 12
Forks: 2
Language: TypeScript
License: MIT
Last pushed: Aug 02, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/agents/neutree-ai/llm-fighter"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
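
For programmatic access, here is a minimal TypeScript sketch of the same call (Node 18+ global fetch). The response field names in QualityReport are assumptions inferred from the card above, not a documented schema:

// Fetch the quality-score card for a repo from the pt-edge API.
// Field names in QualityReport are assumptions inferred from the
// card above, not a documented schema.

interface QualityReport {
  score?: number; // e.g. 33
  tier?: string;  // e.g. "Emerging"
  stars?: number;
  forks?: number;
  license?: string;
}

async function getQuality(owner: string, repo: string): Promise<QualityReport> {
  const url = `https://pt-edge.onrender.com/api/v1/quality/agents/${owner}/${repo}`;
  const res = await fetch(url); // no key needed up to 100 requests/day
  if (!res.ok) throw new Error(`HTTP ${res.status} for ${url}`);
  return (await res.json()) as QualityReport;
}

getQuality("neutree-ai", "llm-fighter").then((report) => console.log(report));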