Mihir3009/GridPuzzle

An evaluation dataset comprising 274 grid-based puzzles of varying complexity

28 / 100
Experimental

This project provides a specialized dataset of 274 grid-based puzzles designed to rigorously test how well large language models (LLMs) think through complex problems, not just their final answers. It includes puzzle questions, the correct solutions, and the step-by-step reasoning chains generated by various LLMs like GPT-4, Claude-3, and Gemini. If you're a researcher or engineer working on improving the reasoning abilities of AI models, this dataset helps you pinpoint exactly where an LLM's logic breaks down.

No commits in the last 6 months.

Use this if you need a comprehensive benchmark to evaluate and diagnose the logical reasoning capabilities of large language models, focusing on their intermediate thought processes for solving grid puzzles.

Not ideal if you are looking for a dataset of text-based or real-world problem-solving scenarios, as this dataset is exclusively focused on grid-based logical puzzles.

AI Reasoning Evaluation · Large Language Model Benchmarking · Cognitive AI Research · Logical Puzzle Solving · Model Error Analysis
Stale 6m · No Package · No Dependents
Maintenance 0 / 25
Adoption 4 / 25
Maturity 16 / 25
Community 8 / 25

How are scores calculated?

Stars

8

Forks

1

Language

License

MIT

Last pushed

Jun 25, 2024

Commits (30d)

0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Mihir3009/GridPuzzle"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
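For programmatic access, the endpoint above can be called from Python. A minimal sketch using only the standard library, assuming the path pattern `/api/v1/quality/llm-tools/{owner}/{repo}` generalizes to other repositories (only the GridPuzzle URL is shown on this page, and the JSON response schema is not documented here):

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def build_url(owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a repository.

    The {owner}/{repo} path pattern is an assumption based on the
    single example URL shown above.
    """
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload for a repository.

    The response schema is undocumented here, so the result is
    returned as a plain dict for the caller to inspect.
    """
    with urllib.request.urlopen(build_url(owner, repo)) as resp:
        return json.load(resp)


# Usage (performs a network request):
#   data = fetch_quality("Mihir3009", "GridPuzzle")
#   print(data)
```

Note the free tier is rate-limited to 100 requests/day without a key, so callers polling many repositories should cache responses or register for a key.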