lechmazur/nyt-connections
Benchmark that evaluates LLMs using 759 NYT Connections puzzles extended with extra trick words
This project provides a benchmark for evaluating large language models (LLMs) on their ability to solve NYT Connections puzzles. It presents each model with the word lists from 759 Connections puzzles, augmented with extra 'trick' words, and outputs a score indicating how well each model performs. It is primarily useful for AI researchers, language model developers, and data scientists who want to compare and improve the word-association and reasoning capabilities of different LLMs.
Use this if you need to rigorously test and compare the word association and categorical reasoning skills of various large language models using a challenging, expanded dataset.
Not ideal if you are looking for an interactive tool to play NYT Connections, as this is a benchmark, not a game.
Stars
199
Forks
8
Language
Python
License
—
Last pushed
Mar 06, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/lechmazur/nyt-connections"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
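The curl command above can also be called from a script. A minimal sketch in Python using only the standard library, assuming the endpoint returns JSON (the response schema is not documented here, so the payload is printed as-is rather than parsed into named fields):

```python
# Minimal sketch: fetch repo quality data from the pt-edge API.
# Endpoint path taken from the curl example above; the JSON response
# shape is an assumption, so we print it verbatim instead of parsing it.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def api_url(owner: str, repo: str) -> str:
    """Build the quality-data URL for a given GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (no API key: 100 requests/day)."""
    with urllib.request.urlopen(api_url(owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    data = fetch_quality("lechmazur", "nyt-connections")
    print(json.dumps(data, indent=2))
```

With a free key (1,000 requests/day), you would presumably attach it as a header or query parameter; the exact mechanism is not stated here, so the sketch uses the keyless tier.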
Higher-rated alternatives
sierra-research/tau2-bench
τ²-Bench: Evaluating Conversational Agents in a Dual-Control Environment
xlang-ai/OSWorld
[NeurIPS 2024] OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments
bigcode-project/bigcodebench
[ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI
scicode-bench/SciCode
A benchmark that challenges language models to code solutions for scientific problems
THUDM/AgentBench
A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)