Mihir3009/GridPuzzle
An evaluation dataset comprising 274 grid-based puzzles of varying complexity
This project provides a specialized dataset of 274 grid-based puzzles designed to rigorously test how well large language models (LLMs) reason through complex problems, not just whether they reach the final answer. It includes the puzzle questions, the correct solutions, and the step-by-step reasoning chains generated by various LLMs such as GPT-4, Claude-3, and Gemini. If you're a researcher or engineer working on improving the reasoning abilities of AI models, this dataset helps you pinpoint exactly where an LLM's logic breaks down.
No commits in the last 6 months.
Use this if you need a comprehensive benchmark to evaluate and diagnose the logical reasoning capabilities of large language models, focusing on their intermediate thought processes for solving grid puzzles.
Not ideal if you are looking for a dataset of text-based or real-world problem-solving scenarios, as this dataset is exclusively focused on grid-based logical puzzles.
Stars
8
Forks
1
Language
—
License
MIT
Category
Last pushed
Jun 25, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Mihir3009/GridPuzzle"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
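If you prefer to query the endpoint from Python rather than curl, the call above can be sketched with the standard library. This is a minimal sketch: the URL pattern comes from the curl example, but the response schema and the `Authorization: Bearer` header name for keyed access are assumptions; check the API documentation for the real details.

```python
import json
import urllib.request
from typing import Optional

# Base URL taken from the curl example above.
BASE_URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def build_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE_URL}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str, api_key: Optional[str] = None) -> dict:
    """Fetch the JSON record for a repository.

    The Authorization header name is hypothetical; the keyless tier
    (100 requests/day) needs no header at all.
    """
    req = urllib.request.Request(build_url(owner, repo))
    if api_key:
        req.add_header("Authorization", f"Bearer {api_key}")
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))

print(build_url("Mihir3009", "GridPuzzle"))
```

The network call is left to the caller; `build_url` alone reproduces the URL used in the curl example.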
Higher-rated alternatives
open-thought/reasoning-gym
[NeurIPS 2025 Spotlight] Reasoning Environments for Reinforcement Learning with Verifiable Rewards
Hmbown/Hegelion
Dialectical reasoning architecture for LLMs (Thesis → Antithesis → Synthesis)
LLM360/Reasoning360
A repo for open research on building large reasoning models
TsinghuaC3I/Awesome-RL-for-LRMs
A Survey of Reinforcement Learning for Large Reasoning Models
bowang-lab/BioReason
BioReason: Incentivizing Multimodal Biological Reasoning within a DNA-LLM Model | NeurIPS '25