Ammaar-Alam/minebench
Minecraft-style voxel benchmark for comparing AI models (Arena + Sandbox)
This tool helps AI researchers and developers evaluate how well large language models (LLMs) understand and reason about 3D space. You provide a natural-language prompt describing a structure; the model outputs 3D coordinates, which are then visualized as a Minecraft-style build. Researchers can use this to benchmark and compare different models' spatial reasoning through the visual results and a ranking system.
Use this if you need to objectively compare and rank AI models based on their ability to translate textual descriptions into complex 3D structures.
Not ideal if your primary interest is in evaluating language models solely on text generation, coding, or other non-spatial tasks.
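The exact prompt and output formats are defined by the repository itself; purely as an illustration of the kind of coordinate list a model might return and how a harness could sanity-check it before rendering, here is a minimal TypeScript sketch. The interface and field names (VoxelPlacement, block) are assumptions, not minebench's actual schema.

```typescript
// Hypothetical shape of a model's output: one voxel per entry.
// The field names below are assumptions, not minebench's actual schema.
interface VoxelPlacement {
  x: number;
  y: number;
  z: number;
  block: string; // e.g. "stone", "oak_planks"
}

// Parse a model response and keep only placements inside a bounded build area.
function parseBuild(raw: string, maxSize = 64): VoxelPlacement[] {
  const voxels = JSON.parse(raw) as VoxelPlacement[];
  return voxels.filter(
    (v) =>
      Number.isInteger(v.x) &&
      Number.isInteger(v.y) &&
      Number.isInteger(v.z) &&
      Math.abs(v.x) < maxSize &&
      v.y >= 0 &&
      v.y < maxSize &&
      Math.abs(v.z) < maxSize &&
      typeof v.block === "string"
  );
}

// Example: a 3x3 stone platform at y = 0.
const platform: VoxelPlacement[] = [];
for (let x = 0; x < 3; x++) {
  for (let z = 0; z < 3; z++) {
    platform.push({ x, y: 0, z, block: "stone" });
  }
}
console.log(parseBuild(JSON.stringify(platform)).length); // 9
```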
Stars: 120
Forks: 7
Language: TypeScript
License: MIT
Category:
Last pushed: Mar 09, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Ammaar-Alam/minebench"
The endpoint is open to everyone: 100 requests per day with no key. A free API key raises the limit to 1,000 requests per day.
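The same data can be fetched programmatically. A minimal sketch, assuming a fetch-capable runtime (Node 18+ or a browser); the code simply prints whatever JSON the endpoint returns, since the response fields are not documented here:

```typescript
// Fetch the repo's quality data from the public endpoint and print the JSON.
async function fetchRepoData(owner: string, repo: string): Promise<unknown> {
  const url = `https://pt-edge.onrender.com/api/v1/quality/llm-tools/${owner}/${repo}`;
  const res = await fetch(url);
  if (!res.ok) {
    throw new Error(`Request failed: ${res.status} ${res.statusText}`);
  }
  return res.json();
}

fetchRepoData("Ammaar-Alam", "minebench")
  .then((data) => console.log(data))
  .catch((err) => console.error(err));
```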
Higher-rated alternatives
sierra-research/tau2-bench
τ²-Bench: Evaluating Conversational Agents in a Dual-Control Environment
xlang-ai/OSWorld
[NeurIPS 2024] OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments
bigcode-project/bigcodebench
[ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI
THUDM/AgentBench
A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)
scicode-bench/SciCode
A benchmark that challenges language models to code solutions for scientific problems