litellm and lm-proxy
LiteLLM is a mature, feature-rich production gateway that supports 100+ providers and ships enterprise capabilities such as cost tracking, guardrails, and load balancing. lm-proxy is a lightweight, minimal OpenAI-compatible wrapper. Both address the same use case (a multi-provider LLM gateway), making them direct competitors, though they target different scales and complexity requirements.
About litellm
BerriAI/litellm
Python SDK, Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with cost tracking, guardrails, load balancing, and logging. [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, vLLM, NVIDIA NIM]
This project helps developers integrate over 100 large language models (LLMs) and AI agents into their applications without worrying about API differences. It accepts requests in a standardized, OpenAI-compatible format, routes them to the appropriate LLM provider, and normalizes the responses, enabling easier management and deployment of AI-powered features. Developers and AI engineers building applications powered by multiple LLMs are the primary users.
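The routing described above is driven, in LiteLLM's proxy mode, by a YAML config that maps public model names to provider-specific backends. A minimal sketch (the model names and environment-variable references here are illustrative):

```yaml
# Minimal LiteLLM proxy config sketch: two public model names,
# each routed to a different provider backend.
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY
  - model_name: claude-sonnet
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20240620
      api_key: os.environ/ANTHROPIC_API_KEY
```

Clients then call the proxy with the `model_name` they want, and LiteLLM forwards the request to the matching provider in that provider's native format.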
About lm-proxy
Nayjest/lm-proxy
OpenAI-compatible HTTP LLM proxy / gateway for multi-provider inference (Google, Anthropic, OpenAI, PyTorch). Lightweight, extensible Python/FastAPI—use as library or standalone service.
This tool helps developers and system architects manage Large Language Models (LLMs) from providers such as OpenAI, Anthropic, and Google, as well as local models. It acts as a single access point: you send requests in the familiar OpenAI API format, and the proxy routes each one to the configured backend model, simplifying multi-provider setups.
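The "intelligently routes" step above can be sketched as a simple model-name-to-provider mapping. This is a conceptual illustration, not lm-proxy's actual code; the prefixes and provider names are assumptions for the sketch:

```python
# Conceptual sketch of gateway routing: map an OpenAI-format request's
# model name to a backend provider. lm-proxy's real routing is
# configuration-driven; this only illustrates the idea.
def route(model: str) -> str:
    """Return the backend provider for a requested model name."""
    prefixes = {
        "gpt-": "openai",
        "claude-": "anthropic",
        "gemini-": "google",
    }
    for prefix, provider in prefixes.items():
        if model.startswith(prefix):
            return provider
    # Anything unrecognized falls back to a local model backend.
    return "local"

print(route("claude-3-opus"))  # → anthropic
```

Because the gateway speaks the OpenAI wire format on the client side, swapping providers is just a change in this mapping; callers never see provider-specific APIs.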
Scores updated daily from GitHub, PyPI, and npm data.