KenyonY/openai-forward
🚀 An efficient forwarding service designed for LLMs · OpenAI API Reverse Proxy
This service helps developers and teams manage their interactions with large language models, whether hosted locally or in the cloud. It accepts requests aimed at AI APIs such as OpenAI or Google Gemini and forwards them efficiently, handling rate limiting, response caching, and API-key management along the way. The result is faster, more controlled, and more cost-effective access to these models.
988 stars. No commits in the last 6 months. Available on PyPI.
Use this if you are a developer or team lead looking to optimize, control, and secure access to various large language models for your applications or internal tools.
Not ideal if you are an individual user making occasional, direct calls to a single AI model without needing advanced management features.
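The core idea of a reverse proxy like this is that clients keep the familiar OpenAI-style request paths and only swap the host for the forwarding service's address. A minimal sketch of that URL rewrite (the forward address `http://localhost:8000` is an assumption for illustration, not taken from this page):

```python
from urllib.parse import urlsplit, urlunsplit

def to_forward_url(upstream_url: str, forward_base: str = "http://localhost:8000") -> str:
    """Rewrite an upstream API URL to point at the forwarding service,
    keeping the original path and query intact."""
    upstream = urlsplit(upstream_url)
    base = urlsplit(forward_base)  # assumed local forward address
    return urlunsplit((base.scheme, base.netloc, upstream.path, upstream.query, ""))

# The client keeps calling the familiar OpenAI path; only the host changes.
print(to_forward_url("https://api.openai.com/v1/chat/completions"))
# → http://localhost:8000/v1/chat/completions
```

In practice this means an existing OpenAI SDK client can usually be repointed at the proxy by changing only its base URL, leaving the rest of the application code untouched.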
Stars: 988
Forks: 312
Language: Python
License: MIT
Category:
Last pushed: Mar 15, 2025
Commits (30d): 0
Dependencies: 17
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/KenyonY/openai-forward"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
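The same endpoint can be called from Python instead of curl. A small sketch that builds the request URL from a repo slug (the live fetch is left commented out since it counts against the daily quota):

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(repo: str) -> str:
    """Build the quality-API URL for a GitHub repo slug like 'owner/name'."""
    return f"{API_BASE}/{repo}"

url = quality_url("KenyonY/openai-forward")
print(url)

# Uncomment to fetch live data (100 requests/day without a key):
# with urllib.request.urlopen(url, timeout=10) as resp:
#     data = json.load(resp)
#     print(data)
```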
Related tools
BerriAI/litellm
Python SDK, Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with...
vava-nessa/free-coding-models
Find, benchmark and install in CLI 158 FREE coding LLM models across 20 providers in real time
envoyproxy/ai-gateway
Manages Unified Access to Generative AI Services built on Envoy Gateway
theopenco/llmgateway
Route, manage, and analyze your LLM requests across multiple providers with a unified API interface.
Portkey-AI/gateway
A blazing fast AI Gateway with integrated guardrails. Route to 200+ LLMs, 50+ AI Guardrails with...