Visual-AI/GAMEBoT
[ACL 2025] GAMEBoT: Transparent Assessment of LLM Reasoning in Games
GAMEBoT helps researchers and AI developers transparently evaluate how well Large Language Models (LLMs) reason and strategize. Given an LLM, it produces detailed game logs, visualizations, and performance metrics from games such as Checkers, Connect 4, and Poker. It is aimed at anyone developing LLMs or assessing their strategic capabilities beyond simple task completion.
No commits in the last 6 months.
Use this if you need to understand not just if an LLM can win a game, but *how* it thinks and strategizes during gameplay.
Not ideal if you're looking for a simple win/loss benchmark or a tool to evaluate LLM performance on non-game-based tasks.
Stars
31
Forks
2
Language
Python
License
—
Category
—
Last pushed
Aug 17, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Visual-AI/GAMEBoT"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
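For scripted access, the same endpoint can be queried from Python. This is a minimal sketch assuming the endpoint returns a JSON body; the exact response fields are not documented here.
import requests

# Public endpoint shown above; per the listing, no API key is needed
# for up to 100 requests/day.
URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Visual-AI/GAMEBoT"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()
data = resp.json()  # assumption: the API responds with JSON
print(data)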
Higher-rated alternatives
sierra-research/tau2-bench
τ²-Bench: Evaluating Conversational Agents in a Dual-Control Environment
xlang-ai/OSWorld
[NeurIPS 2024] OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments
bigcode-project/bigcodebench
[ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI
THUDM/AgentBench
A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)
scicode-bench/SciCode
A benchmark that challenges language models to code solutions for scientific problems