mazzzystar/TurtleBench

TurtleBench: Evaluating Top Language Models via Real-World Yes/No Puzzles.

Overall score: 39 / 100 (Emerging)

This project helps AI researchers and developers assess how well large language models (LLMs) reason through yes/no questions. It takes real-world 'Turtle Soup' puzzles, which require logical deduction rather than factual knowledge, and evaluates an LLM's responses to them. The output is a clear, quantifiable score of how accurately the LLM answered these puzzles, allowing unbiased comparison of different models.
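
As a rough illustration of the kind of scoring involved, here is a minimal sketch in Python; the function and data below are hypothetical, not the project's actual code or dataset.

# Hypothetical sketch: compare a model's yes/no verdicts on puzzle guesses
# against human-annotated ground truth and report the fraction it got right.
def yes_no_accuracy(model_verdicts: list[str], ground_truth: list[str]) -> float:
    assert len(model_verdicts) == len(ground_truth)
    correct = sum(
        m.strip().lower() == g.strip().lower()
        for m, g in zip(model_verdicts, ground_truth)
    )
    return correct / len(ground_truth)

# Example: 3 of 4 verdicts match the annotations -> 0.75
print(yes_no_accuracy(["yes", "no", "yes", "no"], ["yes", "no", "no", "no"]))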

163 stars. No commits in the last 6 months.

Use this if you need an objective way to benchmark and compare the logical reasoning abilities of various large language models using real, user-generated puzzles.

Not ideal if you are looking to evaluate a language model's ability to recall factual information or generate creative text, as it focuses specifically on yes/no logical puzzles.

Tags: AI evaluation, LLM benchmarking, reasoning assessment, model comparison, NLP research
Status: Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 13 / 25
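
The overall 39 / 100 shown above matches the sum of the four category scores; a quick arithmetic check (illustrative only, not the site's scoring code):

# The four 0-25 category scores add up to the 0-100 overall score.
categories = {"Maintenance": 0, "Adoption": 10, "Maturity": 16, "Community": 13}
overall = sum(categories.values())  # 0 + 10 + 16 + 13 = 39
print(f"{overall} / 100")           # -> 39 / 100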


Stars: 163
Forks: 15
Language: Jupyter Notebook
License: Apache-2.0
Last pushed: Oct 16, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/mazzzystar/TurtleBench"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
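
For use from a script, the same request can be made with Python's standard library; a minimal sketch using the no-key tier (how an API key would be attached is not specified here, so none is sent):

# Fetch the quality data for this repository from the API (no-key tier,
# 100 requests/day per the note above) using only the standard library.
import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/mazzzystar/TurtleBench"

with urllib.request.urlopen(URL) as resp:
    data = json.loads(resp.read().decode("utf-8"))

# Pretty-print whatever the API returns; the exact response schema is not
# documented on this page, so inspect it before relying on specific fields.
print(json.dumps(data, indent=2))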