openai-forward and lm-proxy

These are competitors offering similar core functionality—both are API gateway/proxy services that forward requests to multiple LLM providers—though they differ in maturity and adoption. Both are written in Python; lm-proxy is built on FastAPI.

| Metric | openai-forward | lm-proxy |
| --- | --- | --- |
| Score | 60 (Established) | 55 (Established) |
| Maintenance | 0/25 | 10/25 |
| Adoption | 10/25 | 9/25 |
| Maturity | 25/25 | 24/25 |
| Community | 25/25 | 12/25 |
| Stars | 988 | 92 |
| Forks | 312 | 10 |
| Downloads | | |
| Commits (30d) | 0 | 0 |
| Language | Python | Python |
| License | MIT | MIT |
| Risk flags | Stale 6m | No risk flags |

About openai-forward

KenyonY/openai-forward

🚀 An efficient forwarding service for large language models · OpenAI API Reverse Proxy

This service helps developers and teams manage their interactions with large language models, whether hosted locally or in the cloud. It takes requests for AI models such as OpenAI or Google Gemini and forwards them efficiently, handling rate limiting, response caching, and API key management. The result is faster, more controlled, and more cost-effective use of these models.
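To illustrate the response-caching idea, here is a minimal sketch of a cache keyed on a canonicalized hash of the request payload. This is a hypothetical illustration of the technique, not openai-forward's actual code; the function names are invented for this example.

```python
import hashlib
import json


def cache_key(payload: dict) -> str:
    # Canonicalize the request body (sorted keys, compact separators) so
    # logically identical requests map to the same cache key.
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()


_cache: dict = {}


def forward_with_cache(payload: dict, upstream_call) -> dict:
    """Return a cached response when available; otherwise call upstream once."""
    key = cache_key(payload)
    if key not in _cache:
        _cache[key] = upstream_call(payload)
    return _cache[key]
```

With a cache like this, repeated identical prompts are served locally instead of incurring another upstream call, which is where the cost savings come from.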

AI-application-development API-management LLM-operations backend-infrastructure developer-tools

About lm-proxy

Nayjest/lm-proxy

OpenAI-compatible HTTP LLM proxy / gateway for multi-provider inference (Google, Anthropic, OpenAI, PyTorch). Lightweight, extensible Python/FastAPI—use as library or standalone service.

This tool helps developers and system architects manage their use of Large Language Models (LLMs) from various providers like OpenAI, Anthropic, or Google, as well as local models. It acts as a single access point, allowing you to send requests using the familiar OpenAI API format, and the proxy intelligently routes them to the correct LLM. You input your LLM requests and configuration, and it outputs responses from the chosen models, simplifying multi-provider setups.
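The routing step described above can be sketched as a function that maps a requested model name to an upstream provider. The prefix rules below are invented for illustration; lm-proxy's actual routing is driven by its configuration, not hard-coded prefixes.

```python
def route_provider(model: str) -> str:
    """Pick an upstream provider from the model name in an
    OpenAI-format request (hypothetical prefix rules)."""
    prefixes = {
        "gpt-": "openai",
        "claude-": "anthropic",
        "gemini-": "google",
    }
    for prefix, provider in prefixes.items():
        if model.startswith(prefix):
            return provider
    # Anything unrecognized falls back to a locally hosted model.
    return "local"
```

Because clients always speak the OpenAI API format to the proxy, swapping providers is a matter of changing the model name, with no client-side code changes.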

LLM-management API-integration backend-development AI-infrastructure multi-model-deployment

Scores updated daily from GitHub, PyPI, and npm data.