Visual-AI/GAMEBoT

[ACL 2025] GAMEBoT: Transparent Assessment of LLM Reasoning in Games

Score: 23 / 100 (Experimental)

This tool helps researchers and AI developers transparently evaluate how well different Large Language Models (LLMs) reason and strategize. Given an LLM, it produces detailed game logs, visualizations, and performance metrics from games such as Checkers, Connect 4, and Poker. It's designed for anyone developing LLMs or evaluating their strategic capabilities beyond simple task completion.

No commits in the last 6 months.

Use this if you need to understand not just whether an LLM can win a game, but *how* it thinks and strategizes during gameplay.

Not ideal if you're looking for a simple win/loss benchmark or a tool to evaluate LLM performance on non-game-based tasks.

Tags: LLM evaluation, AI research, strategic reasoning, game AI, model comparison
No License · Stale 6m · No Package · No Dependents
Maintenance: 2 / 25
Adoption: 7 / 25
Maturity: 8 / 25
Community: 6 / 25

How are scores calculated?
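
From the figures shown, the overall score appears to be the sum of the four 25-point components: 2 + 7 + 8 + 6 = 23 out of 100.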

Stars: 31
Forks: 2
Language: Python
License: None
Last pushed: Aug 17, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Visual-AI/GAMEBoT"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.
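
For scripted access, here is a minimal Python sketch using the requests library. Only the endpoint URL comes from the listing above; the shape of the JSON response is an assumption, so the script simply prints whatever fields come back.

import requests

# Quality-score endpoint for this repository (same URL as the curl example above).
URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Visual-AI/GAMEBoT"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()

# The response format is assumed to be a JSON object; its exact fields
# are not documented here, so print every key/value pair it returns.
for key, value in resp.json().items():
    print(f"{key}: {value}")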