litellm and lm-proxy

LiteLLM is a mature, feature-rich production gateway supporting 100+ providers with enterprise capabilities (cost tracking, guardrails, load balancing), while lm-proxy is a lightweight, minimal OpenAI-compatible wrapper. The two are direct competitors for the same use case, a multi-provider LLM gateway, but target different scales and complexity requirements.

|               | litellm         | lm-proxy          |
| ------------- | --------------- | ----------------- |
| Score         | 85 (Verified)   | 55 (Established)  |
| Maintenance   | 22/25           | 10/25             |
| Adoption      | 15/25           | 9/25              |
| Maturity      | 25/25           | 24/25             |
| Community     | 23/25           | 12/25             |
| Stars         | 38,910          | 92                |
| Forks         | 6,381           | 10                |
| Downloads     |                 |                   |
| Commits (30d) | 1497            | 0                 |
| Language      | Python          | Python            |
| License       |                 | MIT               |
| Risk flags    | None            | None              |

About litellm

BerriAI/litellm

Python SDK, Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with cost tracking, guardrails, loadbalancing and logging. [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, VLLM, NVIDIA NIM]

This project helps developers integrate over 100 large language models (LLMs) and AI agents into their applications without worrying about API differences. It takes requests in a standardized format, routes them to various LLM providers, and handles responses, enabling easier management and deployment of AI-powered features. Developers and AI engineers building diverse applications powered by multiple LLMs are the primary users.
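The core idea behind such a gateway can be sketched in a few lines: accept one standardized (OpenAI-style) request shape and dispatch it to a per-provider handler. This is an illustrative sketch only; the handler names and the prefix-based routing rule are assumptions for the example, not litellm's actual internals.

```python
# Minimal sketch of gateway-style routing: one request format in,
# dispatched to a provider-specific handler by model-name prefix.
# Handlers here return stubs instead of calling real provider APIs.

def call_openai(model, messages):
    return {"provider": "openai", "model": model, "content": "..."}

def call_anthropic(model, messages):
    return {"provider": "anthropic", "model": model, "content": "..."}

# Assumed routing rule: model-name prefix selects the provider.
ROUTES = {"gpt-": call_openai, "claude-": call_anthropic}

def completion(model, messages):
    """Route an OpenAI-style chat request to the matching provider handler."""
    for prefix, handler in ROUTES.items():
        if model.startswith(prefix):
            return handler(model, messages)
    raise ValueError(f"no provider registered for model {model!r}")
```

A real gateway layers cost tracking, retries, and load balancing on top of this dispatch step, but the caller's interface stays the same regardless of provider.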

AI-application-development LLM-integration AI-gateway API-management developer-tooling

About lm-proxy

Nayjest/lm-proxy

OpenAI-compatible HTTP LLM proxy / gateway for multi-provider inference (Google, Anthropic, OpenAI, PyTorch). Lightweight, extensible Python/FastAPI—use as library or standalone service.

This tool helps developers and system architects manage their use of Large Language Models (LLMs) from various providers like OpenAI, Anthropic, or Google, as well as local models. It acts as a single access point, allowing you to send requests using the familiar OpenAI API format, and the proxy intelligently routes them to the correct LLM. You input your LLM requests and configuration, and it outputs responses from the chosen models, simplifying multi-provider setups.
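Because the proxy speaks the OpenAI wire format, a client only needs to build a standard chat-completions request and point it at the gateway's address. The sketch below constructs such a request with the standard library; the localhost URL, model name, and API key are placeholders, not lm-proxy defaults.

```python
import json
import urllib.request

# Hypothetical gateway address; a real deployment would set its own.
GATEWAY_URL = "http://localhost:8000/v1/chat/completions"

# Standard OpenAI chat-completions payload; the gateway maps the
# model name to whichever provider its configuration routes it to.
payload = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello"}],
}

# Constructing the Request does not send it; calling
# urllib.request.urlopen(req) would, once a gateway is running.
req = urllib.request.Request(
    GATEWAY_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer placeholder-key"},
    method="POST",
)
```

The same request works unchanged whether the model behind it is hosted by OpenAI, Anthropic, Google, or a local runtime, which is the point of the single access point.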

LLM-management API-integration backend-development AI-infrastructure multi-model-deployment

Scores updated daily from GitHub, PyPI, and npm data.