BerriAI/litellm

Python SDK and Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with cost tracking, guardrails, load balancing, and logging. [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, SageMaker, Hugging Face, vLLM, NVIDIA NIM]

Score: 85 / 100 (Verified)

This project helps developers integrate over 100 large language models (LLMs) and AI agents into their applications without worrying about API differences. It takes requests in a standardized format, routes them to various LLM providers, and handles responses, enabling easier management and deployment of AI-powered features. Developers and AI engineers building diverse applications powered by multiple LLMs are the primary users.

38,910 stars. Used by 178 other packages. Actively maintained with 1,497 commits in the last 30 days. Available on PyPI.

Use this if you are a developer building an application that needs to interact with multiple large language models or AI agents from different providers, and you want a simplified, unified interface.

Not ideal if you are a business user looking for a no-code solution to use LLMs, or if you only plan to use a single LLM provider without needing to switch.
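The "unified interface" idea above can be illustrated with a toy routing sketch. This is not LiteLLM's actual code; the prefix table and provider names below are hypothetical, but they mirror the pattern of selecting a backend from a provider-prefixed model name while keeping one OpenAI-style call shape.

```python
# Toy sketch of provider routing behind a single entry point.
# NOT LiteLLM's real implementation; the prefix table is hypothetical.

def route_provider(model: str) -> str:
    """Pick a backend from the model-name prefix, in the style of
    LLM gateways (e.g. 'anthropic/claude-...' vs. a bare OpenAI name)."""
    prefixes = {
        "anthropic/": "anthropic",
        "bedrock/": "bedrock",
        "vertex_ai/": "vertexai",
        "huggingface/": "huggingface",
    }
    for prefix, provider in prefixes.items():
        if model.startswith(prefix):
            return provider
    return "openai"  # bare model names fall through to OpenAI-style handling

print(route_provider("anthropic/claude-3-haiku"))  # anthropic
print(route_provider("gpt-4o-mini"))               # openai
```

The point of the pattern: application code passes one standardized request, and only the model string changes when switching providers.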

AI-application-development LLM-integration AI-gateway API-management developer-tooling
Maintenance 22 / 25
Adoption 15 / 25
Maturity 25 / 25
Community 23 / 25


Stars: 38,910
Forks: 6,381
Language: Python
License:
Category: llm-api-gateways
Last pushed: Mar 13, 2026
Commits (30d): 1,497
Dependencies: 12
Reverse dependents: 178

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/BerriAI/litellm"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
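The same data can be fetched from Python with the standard library. The endpoint path is taken from the curl example above; the shape of the JSON response is not documented here, so inspect it before relying on specific fields.

```python
# Fetch quality data for a repo from the public API shown above.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, repo: str) -> str:
    """Build the API URL for a category and an owner/name repo slug."""
    return f"{BASE}/{category}/{repo}"

url = quality_url("llm-tools", "BerriAI/litellm")
print(url)

# To actually fetch (no key needed, 100 requests/day):
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
```

With a free API key, the same request can be sent with the key in a header or query parameter, per whatever scheme the service documents.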