LLM Evaluation & Benchmarking AI Coding Tools

There are 3 LLM evaluation and benchmarking tools tracked here. The highest-rated is greynewell/matchspec, which scores 44/100 and has 22 stars.

Get all 3 projects as JSON:

curl "https://pt-edge.onrender.com/api/v1/datasets/quality?domain=ai-coding&subcategory=llm-evaluation-benchmarking&limit=20"

Open to everyone: 100 requests/day with no key needed. A free API key raises the limit to 1,000 requests/day.
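
If you'd rather call the endpoint from code, here is a minimal Python sketch using only the standard library. The URL is copied from the curl example above; the response is printed as-is rather than parsed against an assumed schema, and the `X-API-Key` header name is a hypothetical placeholder, since this page doesn't say how a key is sent.

```python
# Minimal sketch: fetch the dataset with Python's standard library only.
# The URL is taken from the curl example above; the API-key header name is
# a guess (hypothetical) -- consult the API docs for the real mechanism.
import json
import urllib.request

URL = (
    "https://pt-edge.onrender.com/api/v1/datasets/quality"
    "?domain=ai-coding&subcategory=llm-evaluation-benchmarking&limit=20"
)

req = urllib.request.Request(URL)
# req.add_header("X-API-Key", "YOUR_KEY")  # hypothetical header for the 1,000/day tier

with urllib.request.urlopen(req) as resp:
    data = json.load(resp)

# Pretty-print the payload instead of assuming its shape.
print(json.dumps(data, indent=2))
```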

| # | Tool | Description | Score | Tier |
|---|------|-------------|-------|------|
| 1 | greynewell/matchspec | Eval framework. Define correct, test against it, get results. | 44 | Emerging |
| 2 | adrianlol7/evaldriven.org | Define, measure, and enforce code correctness with Eval-Driven Development,... | 22 | Experimental |
| 3 | wheldnz/next-evals-oss | 🧩 Evaluate Next.js code quality using popular AI models with ease. Get... | 14 | Experimental |
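
All three projects revolve around the loop the matchspec blurb compresses into three steps: define what correct means, test the model against it, and read off the results. The tool-agnostic sketch below illustrates that loop; the `eval_cases`, the naive string checkers, and the `run_model` stub are hypothetical illustrations, not the API of any project listed above.

```python
# Tool-agnostic eval loop: define "correct", run the model on each case,
# score the results. Cases, checkers, and the model stub are hypothetical.
from typing import Callable

eval_cases = [
    # (prompt, checker deciding whether an output passes) -- naive on purpose
    ("Reverse a string in Python.", lambda out: "[::-1]" in out or "reversed(" in out),
    ("Which HTTP verb is used for idempotent full updates?", lambda out: "PUT" in out.upper()),
]

def run_model(prompt: str) -> str:
    """Stand-in for a real model call (API request, local inference, ...)."""
    raise NotImplementedError("wire this to your model")

def run_evals(model: Callable[[str], str]) -> float:
    """Return the fraction of cases whose output the checker accepts."""
    passed = 0
    for prompt, check in eval_cases:
        try:
            if check(model(prompt)):
                passed += 1
        except Exception:
            pass  # a crash or unimplemented model counts as a failure
    return passed / len(eval_cases)

if __name__ == "__main__":
    # With the stub above every case fails; swap in a real model call.
    print(f"pass rate: {run_evals(run_model):.0%}")
```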