cornell-zhang/heurigym

Agentic Benchmark for LLM-Crafted Heuristics in Combinatorial Optimization (ICLR'26)

Score: 45 / 100 (Emerging)

This project evaluates how effectively large language models (LLMs) can create and iteratively improve heuristics for complex real-world optimization problems. It poses a range of combinatorial optimization tasks, such as airline crew pairing and protein sequence design, and measures the quality of the heuristics each LLM generates. Researchers and practitioners applying LLMs to hard optimization tasks can use it to benchmark and compare different LLM approaches.

Use this if you need a rigorous, objective way to benchmark different LLM agents' ability to solve practical, open-ended combinatorial optimization problems through code-driven interaction.

Not ideal if you are looking for an off-the-shelf solver for a specific optimization problem, or if your tasks involve simple, closed-form challenges.

Tags: Combinatorial Optimization, Electronic Design Automation, Computational Biology, Logistics Planning, Compiler Optimization
No Package · No Dependents
Maintenance: 10 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 11 / 25


Stars: 64
Forks: 6
Language: Python
License: Apache-2.0
Last pushed: Mar 05, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/cornell-zhang/heurigym"

Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.
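
For scripted access, here is a minimal Python sketch of the same request, using the requests library and assuming the endpoint returns JSON (the response schema is not documented on this page):

import requests

# Same quality-score endpoint as the curl example above.
URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/cornell-zhang/heurigym"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()  # fail loudly on 4xx/5xx, e.g. once the daily quota is exhausted
data = resp.json()       # assumption: the endpoint returns a JSON document
print(data)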