greynewell/infermux
Route inference across LLM providers. Track cost per request.
infermux sits between an application and multiple large language model (LLM) providers. It accepts requests in a single, consistent format, routes each one to the best available model, and records the cost of every call, abstracting away the underlying provider. Developers building LLM-backed applications can use it to consolidate provider handling in one backend layer.
Use this if you are building an application that needs to use multiple large language model providers and you want a single, consistent way to send requests and track usage.
Not ideal if you are a casual user looking for a direct interface to an LLM, or if your application only interacts with a single LLM provider.
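To make the routing idea concrete, here is a minimal sketch in Go (the project's language) of cost-aware provider selection. The `Provider` type, field names, and prices are illustrative assumptions for this sketch, not infermux's actual API or configuration.

```go
package main

import "fmt"

// Provider describes one LLM backend with an illustrative per-1K-token price.
// These names and numbers are hypothetical, not infermux's real config.
type Provider struct {
	Name      string
	CostPer1K float64 // USD per 1K tokens
	Available bool
}

// cheapest returns the lowest-cost available provider, or nil if none is up.
// A real router would also weigh latency, model capability, and rate limits.
func cheapest(providers []Provider) *Provider {
	var best *Provider
	for i := range providers {
		p := &providers[i]
		if !p.Available {
			continue
		}
		if best == nil || p.CostPer1K < best.CostPer1K {
			best = p
		}
	}
	return best
}

func main() {
	providers := []Provider{
		{Name: "provider-a", CostPer1K: 0.0100, Available: true},
		{Name: "provider-b", CostPer1K: 0.0080, Available: true},
		{Name: "provider-c", CostPer1K: 0.0005, Available: false},
	}
	if p := cheapest(providers); p != nil {
		fmt.Printf("route to %s at $%.4f/1K tokens\n", p.Name, p.CostPer1K)
	}
}
```

Tracking cost per request then amounts to multiplying the chosen provider's rate by the tokens each request consumed and accumulating the total.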
Stars: 89
Forks: 7
Language: Go
License: MIT
Category:
Last pushed: Feb 17, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/greynewell/infermux"
Open to everyone: 100 requests/day with no API key. A free key raises the limit to 1,000 requests/day.
Higher-rated alternatives
BerriAI/litellm
Python SDK, Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with...
vava-nessa/free-coding-models
Find, benchmark and install in CLI 158 FREE coding LLM models across 20 providers in real time
envoyproxy/ai-gateway
Manages Unified Access to Generative AI Services built on Envoy Gateway
theopenco/llmgateway
Route, manage, and analyze your LLM requests across multiple providers with a unified API interface.
Portkey-AI/gateway
A blazing fast AI Gateway with integrated guardrails. Route to 200+ LLMs, 50+ AI Guardrails with...