algorithmicsuperintelligence/optillm

Optimizing inference proxy for LLMs

Score: 62 / 100 (Established)

This tool acts as a smart proxy in front of your existing Large Language Model (LLM) services, such as OpenAI. It intercepts standard LLM requests and applies advanced reasoning techniques to them, producing significantly more accurate answers on complex tasks such as math, coding, and logic problems. It is useful for anyone relying on LLMs for critical reasoning, problem solving, or content generation, including researchers, data scientists, and developers building LLM applications.
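A minimal sketch of how such a proxy is typically used from Python, assuming it exposes an OpenAI-compatible endpoint on localhost:8000 and selects a reasoning technique via a model-name prefix; both the address and the prefix are assumptions here, so check the project's README for the actual values.

from openai import OpenAI

# Assumption: the proxy serves an OpenAI-compatible API at this local address
# and forwards your upstream provider key to the underlying LLM service.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="your-upstream-provider-key")

# Assumption: a technique prefix on the model name (here "moa-") chooses the
# reasoning strategy; the real prefixes and techniques are listed in the README.
response = client.chat.completions.create(
    model="moa-gpt-4o-mini",
    messages=[{"role": "user", "content": "How many prime numbers are there below 50?"}],
)
print(response.choices[0].message.content)

Because the proxy speaks the same API as the upstream provider, an existing application usually only needs its base URL and model name changed to route requests through it.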

3,377 stars. Actively maintained with 6 commits in the last 30 days.

Use this if you need to dramatically improve the accuracy of your LLM's outputs on reasoning tasks without having to train or fine-tune models.

Not ideal if your primary concern is minimizing inference latency or if your LLM tasks are simple and don't require complex reasoning.

Tags: LLM application development, AI research, reasoning, automation, problem solving, code generation
No package published · No dependents
Maintenance 17 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 19 / 25


Stars: 3,377
Forks: 265
Language: Python
License: Apache-2.0
Last pushed: Jan 28, 2026
Commits (30d): 6

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/algorithmicsuperintelligence/optillm"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.