lechmazur/step_game
Multi-Agent Step Race Benchmark: Assessing LLM Collaboration and Deception Under Pressure. A multi-player “step-race” that challenges LLMs to engage in public conversation before secretly picking a move (1, 3, or 5 steps). Whenever two or more players choose the same number, all colliding players fail to advance.
In each game, three LLMs hold a public conversation, then secretly choose to advance 1, 3, or 5 steps; any players who pick the same number all fail to advance that turn (the sketch below illustrates the collision rule). Watching the models' conversational tactics and their resulting moves on the board helps AI researchers and developers understand how different LLMs strategize, collaborate, and deceive under pressure.
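A minimal Python sketch of the collision rule described above. The function and variable names are hypothetical illustrations, not taken from the repository, and the win condition (e.g., first player to reach a target position) is not specified on this page, so it is omitted here.

from collections import Counter

def resolve_turn(moves, positions):
    # moves maps player -> secretly chosen step (1, 3, or 5);
    # positions maps player -> current position on the board.
    counts = Counter(moves.values())
    for player, step in moves.items():
        if counts[step] == 1:
            positions[player] += step  # unique pick: the player advances
        # two or more players chose the same number: none of them advance
    return positions

positions = resolve_turn({"A": 5, "B": 5, "C": 3}, {"A": 0, "B": 0, "C": 0})
# A and B collide on 5 and stay at 0; C advances alone to 3.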
Use this if you are an AI researcher or developer looking to evaluate the social reasoning, negotiation, and strategic capabilities of Large Language Models in a dynamic, multi-agent environment.
Not ideal if you are looking for a simple benchmark of factual recall or a tool to test traditional coding abilities of LLMs.
Stars: 85
Forks: 2
Language: —
License: —
Category: —
Last pushed: Dec 09, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/lechmazur/step_game"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
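For scripted access, here is a minimal Python sketch equivalent to the curl command above. The endpoint URL comes from that command; the response schema is an assumption, since this page does not document the payload, and how a free API key would be attached (header or query parameter) is also unspecified, so it is left out.

import requests

resp = requests.get(
    "https://pt-edge.onrender.com/api/v1/quality/agents/lechmazur/step_game",
    timeout=10,
)
resp.raise_for_status()
data = resp.json()  # exact JSON fields are not documented on this page
print(data)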
Higher-rated alternatives
StonyBrookNLP/appworld
🌍 AppWorld: A Controllable World of Apps and People for Benchmarking Function Calling and...
qualifire-dev/rogue
AI Agent Evaluator & Red Team Platform
microsoft/WindowsAgentArena
Windows Agent Arena (WAA) 🪟 is a scalable OS platform for testing and benchmarking of...
future-agi/ai-evaluation
Evaluation Framework for all your AI related Workflows
RouteWorks/RouterArena
RouterArena: An open framework for evaluating LLM routers with standardized datasets, metrics,...