matdev83/llm-interactive-proxy
Connect any LLM-powered client app, such as a coding agent, to any supported inference backend/model.
The proxy sits between your existing AI client application and your chosen LLM backend, routing requests and layering on security, cost control, and failover. Developers who build and manage AI applications will find it useful for greater flexibility and control over their LLM integrations.
Use this if you need to integrate your AI application with multiple LLM providers, add robust security, or gain more control and observability over your agentic workflows without rewriting your client code.
Not ideal if you are a casual user of a single LLM API and do not require advanced routing, security, or multi-provider management.
Stars: 15
Forks: 1
Language: Python
License: AGPL-3.0
Category:
Last pushed: Mar 25, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/matdev83/llm-interactive-proxy"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.
Higher-rated alternatives
BerriAI/litellm
Python SDK, Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with...
vava-nessa/free-coding-models
Find, benchmark and install in CLI 158 FREE coding LLM models across 20 providers in real time
envoyproxy/ai-gateway
Manages Unified Access to Generative AI Services built on Envoy Gateway
theopenco/llmgateway
Route, manage, and analyze your LLM requests across multiple providers with a unified API interface.
Portkey-AI/gateway
A blazing fast AI Gateway with integrated guardrails. Route to 200+ LLMs, 50+ AI Guardrails with...