oripress/AlgoTune

AlgoTune is a NeurIPS 2025 benchmark made up of 154 math, physics, and computer science problems. The goal is to write code that solves each problem and runs faster than the existing reference implementation.

Score: 49 / 100 (Emerging)

AlgoTune helps you benchmark how well large language models can optimize code for common math, physics, and computer science functions. You provide existing code and select an AI model; AlgoTune then outputs new code versions and detailed speed-up reports. This is for researchers and engineers who want to assess or improve the performance optimization capabilities of AI models.

Use this if you need to systematically evaluate how effectively large language models can generate faster, equivalent code for numerical problems.

Not ideal if you are looking for a tool to automatically fix bugs in your existing code or to generate code from natural language prompts without a focus on performance optimization.
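The actual tasks are defined in the repository; as a purely hypothetical illustration (not one of AlgoTune's 154 problems) of what "faster, equivalent code" means here, consider a quadratic reference solver next to a linear rewrite that returns the same answers:

```python
import timeit

def baseline_two_sum(nums, target):
    """Quadratic reference solver: try every pair of indices."""
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return (i, j)
    return None

def optimized_two_sum(nums, target):
    """Linear rewrite using a hash map of previously seen values."""
    seen = {}
    for j, x in enumerate(nums):
        if target - x in seen:
            return (seen[target - x], j)
        seen[x] = j
    return None

if __name__ == "__main__":
    nums = list(range(2_000))
    target = 3_997  # only the last pair matches, so the baseline scans almost everything
    assert baseline_two_sum(nums, target) == optimized_two_sum(nums, target)
    t_base = timeit.timeit(lambda: baseline_two_sum(nums, target), number=3)
    t_fast = timeit.timeit(lambda: optimized_two_sum(nums, target), number=3)
    print(f"speedup: {t_base / t_fast:.0f}x")
```

AlgoTune's benchmark harness does something analogous at scale: it checks that a model's rewrite stays equivalent to the reference solver, then reports the measured speed-up.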

algorithm-optimization computational-performance scientific-computing AI-code-generation performance-benchmarking
No Package · No Dependents

Maintenance: 10 / 25
Adoption: 9 / 25
Maturity: 15 / 25
Community: 15 / 25

How are scores calculated?

Stars: 95
Forks: 13
Language: Python
License: MIT
Last pushed: Mar 12, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/oripress/AlgoTune"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
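Since the endpoint above is plain HTTPS, it can also be queried from the Python standard library. The helper names below and the assumption that the response body is JSON are mine, not taken from the page:

```python
import json
from urllib.request import urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-report URL for a given GitHub owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and parse a quality report (assumes a JSON response body)."""
    with urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Same URL the curl example hits, no API key needed at the free tier.
    print(quality_url("oripress", "AlgoTune"))
```

At 100 requests/day without a key, a small script like this is enough to track a handful of repositories; heavier use would need the free key mentioned above.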