traceloop/hub
High-scale LLM gateway, written in Rust. OpenTelemetry-based observability included
This is a high-performance gateway that helps developers and MLOps engineers manage their Large Language Model (LLM) integrations. It accepts requests for LLM operations (such as chat completions or embeddings) and routes them to various LLM providers behind a unified API, giving you a consistent way to interact with different LLMs along with built-in observability for monitoring usage and performance.
Use this if you are a developer or MLOps engineer building applications that need to use multiple LLMs, require high performance, and need detailed observability for all LLM interactions.
Not ideal if you are a data scientist or researcher who primarily uses a single LLM provider through its native SDK and doesn't require advanced routing or centralized observability for distributed applications.
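As a sketch of what the unified API looks like from the client side, the snippet below sends one OpenAI-style chat-completion request to a hub instance. The port, endpoint path, and model name are assumptions for illustration, not taken from the project's documentation; the gateway's routing configuration decides which provider actually serves the model.

```shell
# All values below are illustrative; adjust to your hub deployment.
HUB_URL="http://localhost:3000/api/v1/chat/completions"

# One OpenAI-style request body, regardless of which provider
# the gateway routes the call to.
PAYLOAD='{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}]}'

curl -sS "$HUB_URL" -H "Content-Type: application/json" -d "$PAYLOAD" \
  || echo "hub not reachable at $HUB_URL"
```

Because the surface is OpenAI-compatible in style, swapping providers is a gateway-side configuration change; the client request stays the same.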
Stars: 172
Forks: 31
Language: Rust
License: Apache-2.0
Category:
Last pushed: Mar 04, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/traceloop/hub"
Open to everyone: 100 requests/day, no key needed. A free key raises the limit to 1,000/day.
Related tools
BerriAI/litellm
Python SDK, Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with...
vava-nessa/free-coding-models
Find, benchmark and install in CLI 158 FREE coding LLM models across 20 providers in real time
envoyproxy/ai-gateway
Manages Unified Access to Generative AI Services built on Envoy Gateway
theopenco/llmgateway
Route, manage, and analyze your LLM requests across multiple providers with a unified API interface.
Portkey-AI/gateway
A blazing fast AI Gateway with integrated guardrails. Route to 200+ LLMs, 50+ AI Guardrails with...