algorithmicsuperintelligence/optillm
Optimizing inference proxy for LLMs
This tool is an optimizing proxy that sits between your application and your existing Large Language Model (LLM) service, such as OpenAI. It intercepts standard LLM requests and applies advanced inference-time reasoning techniques to produce significantly more accurate answers, especially on complex tasks like math, coding, and logical problems. It benefits anyone using LLMs for critical reasoning, problem solving, or content generation, including researchers, data scientists, and developers building LLM applications.
3,377 stars. Actively maintained with 6 commits in the last 30 days.
Use this if you need to dramatically improve the accuracy of your LLM's outputs on reasoning tasks without having to train or fine-tune models.
Not ideal if your primary concern is minimizing inference latency or if your LLM tasks are simple and don't require complex reasoning.
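Since the proxy accepts standard LLM requests, an existing OpenAI-style client can usually be pointed at it with little more than a base-URL change. Below is a minimal sketch using the official openai Python SDK; the local address, port, and the "moa-" model-name prefix (used here to request a mixture-of-agents style technique) are assumptions for illustration, so check the project's README for the exact conventions.

from openai import OpenAI

# Assumption: the optillm proxy is running locally on port 8000 and holds the
# real provider credentials, so the client-side key is only a placeholder.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="optillm")

# Assumption: the reasoning technique is selected via a prefix on the model name
# (e.g. "moa-" for mixture of agents); see the optillm README for the actual list.
response = client.chat.completions.create(
    model="moa-gpt-4o-mini",
    messages=[{"role": "user", "content": "How many prime numbers are there between 10 and 30?"}],
)

print(response.choices[0].message.content)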
Stars: 3,377
Forks: 265
Language: Python
License: Apache-2.0
Category: Prompt Engineering
Last pushed: Jan 28, 2026
Commits (30d): 6
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/algorithmicsuperintelligence/optillm"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
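For programmatic access, the same endpoint can be queried with any HTTP client; here is a short sketch using Python's requests library. The shape of the JSON response is not documented on this page, so the snippet simply pretty-prints whatever is returned.

import json
import requests

# Endpoint copied from the curl example above; no key is needed for up to
# 100 requests/day.
URL = "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/algorithmicsuperintelligence/optillm"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()

# The response fields are not documented here, so just print the full payload.
print(json.dumps(resp.json(), indent=2))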
Related tools
langfuse/langfuse
🪢 Open source LLM engineering platform: LLM Observability, metrics, evals, prompt management,...
Arize-ai/phoenix
AI Observability & Evaluation
Mirascope/mirascope
The LLM Anti-Framework
Agenta-AI/agenta
The open-source LLMOps platform: prompt playground, prompt management, LLM evaluation, and LLM...
Helicone/helicone
🧊 Open source LLM observability platform. One line of code to monitor, evaluate, and experiment. YC W23 🍓