litellm and LLM-API-Key-Proxy
These are **competitors**: both provide an OpenAI-compatible gateway to multiple LLM providers with load balancing, but LiteLLM is the mature, battle-tested option (38k+ stars, 95M downloads) while LLM-API-Key-Proxy is an early-stage alternative that has not yet gained adoption.
About litellm
BerriAI/litellm
Python SDK, Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with cost tracking, guardrails, loadbalancing and logging. [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, VLLM, NVIDIA NIM]
This project helps developers integrate over 100 large language models (LLMs) and AI agents into their applications without worrying about API differences. It takes requests in a standardized format, routes them to various LLM providers, and handles responses, enabling easier management and deployment of AI-powered features. Developers and AI engineers building diverse applications powered by multiple LLMs are the primary users.
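The "standardized format" here is the OpenAI chat-completions request shape, and gateways like LiteLLM dispatch on a provider prefix embedded in the model string (e.g. `anthropic/claude-3-haiku`). A minimal sketch of that routing idea, assuming a hypothetical `route` helper and provider table (this is illustrative, not LiteLLM's internals):

```python
# Illustrative sketch of prefix-based routing as an LLM gateway does it:
# one OpenAI-style request dict in, dispatched to a provider by model prefix.
# PROVIDERS and route() are hypothetical names for illustration only.

PROVIDERS = {
    "openai": "https://api.openai.com",
    "anthropic": "https://api.anthropic.com",
    "cohere": "https://api.cohere.com",
}

def route(request: dict) -> tuple[str, dict]:
    """Split 'provider/model' and return (provider base URL, rewritten request)."""
    model = request["model"]
    provider, _, bare_model = model.partition("/")
    if not bare_model:
        # No prefix: fall back to OpenAI, as unprefixed models are OpenAI-style.
        provider, bare_model = "openai", model
    base_url = PROVIDERS[provider]
    # Forward the same request, with the prefix stripped for the upstream API.
    return base_url, {**request, "model": bare_model}

base, req = route({
    "model": "anthropic/claude-3-haiku",
    "messages": [{"role": "user", "content": "hi"}],
})
# base -> "https://api.anthropic.com", req["model"] -> "claude-3-haiku"
```

A real gateway additionally translates request and response bodies between provider schemas; the point of the sketch is only the single entry point and prefix dispatch.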
About LLM-API-Key-Proxy
Mirrowel/LLM-API-Key-Proxy
Universal LLM Gateway: One API, every LLM. OpenAI/Anthropic-compatible endpoints with multi-provider translation and intelligent load-balancing.
This tool helps individuals or small teams who work with multiple Large Language Models (LLMs) and need a simpler, more reliable way to manage them. You register your various LLM API keys and model choices, and it exposes a single access point that works with almost any existing LLM application. This is ideal for developers, researchers, or anyone building applications that use LLMs.
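The load balancing a key proxy offers can be as simple as rotating requests across a pool of keys for the same provider, so no single key hits its rate limit. A minimal round-robin sketch, assuming a hypothetical `KeyPool` class (illustrative only, not LLM-API-Key-Proxy's actual implementation):

```python
import itertools

class KeyPool:
    """Rotate requests across several API keys for one provider.
    Hypothetical example class -- not this project's code."""

    def __init__(self, keys: list[str]):
        # cycle() yields keys in order forever: a, b, c, a, b, c, ...
        self._cycle = itertools.cycle(keys)

    def next_key(self) -> str:
        """Return the key to attach to the next outgoing request."""
        return next(self._cycle)

pool = KeyPool(["sk-key-a", "sk-key-b", "sk-key-c"])
picked = [pool.next_key() for _ in range(4)]
# round-robin order: key-a, key-b, key-c, then back to key-a
```

"Intelligent" balancing would extend this with per-key rate-limit and error tracking, but the single-access-point idea is the same: the client sends one request, and the proxy picks the key.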