litellm and llmgateway
These are **competitors**: both provide unified API gateways that abstract away provider-specific implementations and route requests across multiple LLM providers. litellm is significantly more mature and feature-rich (cost tracking, guardrails, load balancing), while llmgateway offers a simpler alternative for basic multi-provider routing and analytics.
About litellm
BerriAI/litellm
Python SDK, Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with cost tracking, guardrails, load balancing, and logging. [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, VLLM, NVIDIA NIM]
This project helps developers integrate over 100 large language models (LLMs) and AI agents into their applications without worrying about API differences. It takes requests in a standardized format, routes them to various LLM providers, and handles responses, enabling easier management and deployment of AI-powered features. Developers and AI engineers building diverse applications powered by multiple LLMs are the primary users.
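The core idea behind this kind of gateway is that every request arrives in one standardized format (the OpenAI chat-completions shape) and the gateway decides which provider to call based on the model string. The sketch below illustrates that routing convention in plain Python; the function and provider-prefix convention are illustrative, not litellm's actual internals.

```python
# Minimal sketch of "unified API" routing: one request format, with the
# provider chosen from a "provider/model" prefix in the model string.
# This is an illustration of the concept, not litellm's implementation.

def pick_provider(model: str) -> str:
    """Route on a 'provider/model' prefix; default to openai (assumption)."""
    return model.split("/", 1)[0] if "/" in model else "openai"

# The same OpenAI-format payload works regardless of the target provider;
# only the model string changes.
request = {
    "model": "anthropic/claude-3-haiku",
    "messages": [{"role": "user", "content": "Say hello"}],
}

print(pick_provider(request["model"]))          # → anthropic
print(pick_provider("gpt-4o"))                  # → openai
```

In practice, switching an application from one provider to another then amounts to editing a single model string, with credentials for each provider supplied separately (typically via environment variables).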
About llmgateway
theopenco/llmgateway
Route, manage, and analyze your LLM requests across multiple providers with a unified API interface.
This helps developers who build applications using Large Language Models (LLMs) by acting as a central hub for all LLM interactions. It takes your application's LLM requests and routes them to various providers like OpenAI or Anthropic, while also managing API keys, tracking usage, and analyzing performance. Developers can use this to streamline their LLM infrastructure and gain insights into costs and model effectiveness.
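The usage-tracking and cost-analysis side of a gateway boils down to recording token counts per request and aggregating them per model against a price table. A minimal sketch of that bookkeeping, with hypothetical prices and request data (not llmgateway's actual schema):

```python
# Sketch of per-model usage tracking and cost aggregation, as a gateway
# might do it. Prices and the request log below are illustrative.
from collections import defaultdict

# Hypothetical input-token prices per 1K tokens, in USD.
PRICE_PER_1K = {"gpt-4o": 0.005, "claude-3-haiku": 0.00025}

# Simulated request log: (model, tokens consumed) per proxied request.
request_log = [("gpt-4o", 1200), ("claude-3-haiku", 4000), ("gpt-4o", 800)]

usage = defaultdict(int)
for model, tokens in request_log:
    usage[model] += tokens

# Aggregate cost per model from the accumulated token counts.
costs = {m: t / 1000 * PRICE_PER_1K[m] for m, t in usage.items()}
print(costs)  # → {'gpt-4o': 0.01, 'claude-3-haiku': 0.001}
```

Dashboards for "cost per model" or "model effectiveness" are essentially queries over this kind of per-request ledger.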