neutree-ai/llm-fighter
Evaluate LLM agentic capabilities through combat games
LLM Fighter helps you assess how effectively different large language models (LLMs) make strategic decisions and follow rules by pitting them against each other in turn-based combat games. You supply the API endpoints for two LLM agents, the system simulates a battle, and it reports which model performed better and how. It is aimed at AI researchers, LLM evaluators, and anyone comparing or developing LLM-powered agents.
No commits in the last 6 months.
Use this if you need an engaging and concrete way to benchmark the 'agentic' capabilities and strategic reasoning of various LLMs.
Not ideal if you need to evaluate LLMs for tasks like content generation, summarization, or simple question-answering, as this focuses specifically on strategic decision-making.
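To make the two-endpoint setup concrete, here is a minimal TypeScript sketch of how one turn of such a duel could be requested from an OpenAI-compatible chat endpoint. The AgentEndpoint and GameState shapes, the prompt wording, and the askForMove helper are all assumptions for illustration, not llm-fighter's actual API.

// Hypothetical sketch, not llm-fighter's actual API: it only illustrates
// the idea of pointing an OpenAI-compatible chat endpoint at a turn-based
// combat game and asking the model for its next move.

interface AgentEndpoint {
  name: string;    // label used in the battle log
  baseUrl: string; // OpenAI-compatible chat-completions base URL
  apiKey: string;  // bearer token for that endpoint
  model: string;   // model identifier to request
}

interface GameState {
  hp: Record<string, number>; // remaining hit points per agent name
  turn: number;               // current turn counter
}

// Ask one agent for its next move; the prompt framing here is invented.
async function askForMove(agent: AgentEndpoint, state: GameState): Promise<string> {
  const res = await fetch(`${agent.baseUrl}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${agent.apiKey}`,
    },
    body: JSON.stringify({
      model: agent.model,
      messages: [
        {
          role: "system",
          content: "You are a fighter in a turn-based duel. Reply with exactly one move: ATTACK, DEFEND, or HEAL.",
        },
        {
          role: "user",
          content: `Turn ${state.turn}. HP: ${JSON.stringify(state.hp)}. Your move?`,
        },
      ],
    }),
  });
  const data = await res.json();
  // Fall back to ATTACK if the model's reply is missing or malformed.
  return data.choices?.[0]?.message?.content?.trim() ?? "ATTACK";
}

Running two such agents in alternation, applying their moves to a shared game state, and stopping when one side's hit points reach zero is the kind of loop the simulator automates; how closely each model sticks to the move format and how sensibly it plays is what gets compared.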
Stars: 12
Forks: 2
Language: TypeScript
License: MIT
Category:
Last pushed: Aug 02, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/neutree-ai/llm-fighter"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
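The same endpoint can also be called from code. This short TypeScript sketch simply fetches the JSON and prints it, since the response schema is not documented here; run it as an ES module (for example with Node 18+, where fetch and top-level await are available).

// Fetch the quality data for this repo and print the raw JSON.
// Works without an API key within the 100 requests/day limit.
const url =
  "https://pt-edge.onrender.com/api/v1/quality/agents/neutree-ai/llm-fighter";

const res = await fetch(url);
if (!res.ok) throw new Error(`Request failed: HTTP ${res.status}`);
console.log(JSON.stringify(await res.json(), null, 2));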
Higher-rated alternatives
google-deepmind/concordia: A library for generative social simulation
Mai-xiyu/Minecraft_AI: AI Play Minecraft
mikelma/craftium: A framework for creating rich, 3D, Minecraft-like single and multi-agent environments for AI...
cocacola-lab/MineLand: Simulating Large-Scale Multi-Agent Interactions with Limited Multimodal Senses and Physical Needs
rezaho/MARSYS: Multi-Agent Reasoning Systems