Scale3-Labs/langtrace
Langtrace 🔍 is an open-source, OpenTelemetry-based end-to-end observability tool for LLM applications, providing real-time tracing, evaluations, and metrics for popular LLMs, LLM frameworks, vector databases, and more. Integrate using TypeScript or Python. 🚀💻📊
This tool helps developers understand and improve AI applications built on large language models (LLMs). It instruments your LLM application's interactions with LLM APIs, vector databases, and frameworks, and returns real-time traces, performance insights such as latency and cost, and debugging tools for identifying issues. It is aimed at software developers and AI engineers building and maintaining LLM-powered applications.
Use this if you are building an application with LLMs and need to monitor its performance, debug issues, and analyze how it interacts with various AI services in real time.
Not ideal if you are looking for a non-technical, end-user analytics tool or if your application does not integrate with LLMs, vector databases, or LLM frameworks.
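Langtrace builds on OpenTelemetry spans to capture each LLM call's timing and metadata. As a conceptual illustration only (this is a stdlib-only sketch, not the Langtrace SDK or the OpenTelemetry API; the span fields and pricing numbers are invented for the example), the core idea is a context manager that wraps an LLM call and records latency and estimated cost:

```python
import time
from contextlib import contextmanager

# Collected spans; a real OpenTelemetry exporter would ship these to a backend.
SPANS = []

@contextmanager
def llm_span(name, model, cost_per_1k_tokens):
    """Record latency and estimated cost for one LLM call (conceptual sketch)."""
    span = {"name": name, "model": model}
    start = time.perf_counter()
    try:
        yield span
    finally:
        span["latency_s"] = time.perf_counter() - start
        tokens = span.get("tokens", 0)  # token count set by the caller below
        span["cost_usd"] = tokens / 1000 * cost_per_1k_tokens
        SPANS.append(span)

# Usage: wrap a (stubbed) LLM call and record its token usage.
with llm_span("chat.completion", model="stub-model", cost_per_1k_tokens=0.002) as span:
    span["tokens"] = 500  # in practice this comes from the provider's usage field

print(SPANS[0]["cost_usd"])  # 0.001
```

The real SDKs automate this pattern: a one-time `init` call patches the supported LLM and vector-database clients so spans are emitted without manual wrapping.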
Stars: 1,184
Forks: 120
Language: TypeScript
License: AGPL-3.0
Category:
Last pushed: Nov 17, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/Scale3-Labs/langtrace"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related tools
langfuse/langfuse
🪢 Open source LLM engineering platform: LLM Observability, metrics, evals, prompt management,...
Arize-ai/phoenix
AI Observability & Evaluation
Mirascope/mirascope
The LLM Anti-Framework
Agenta-AI/agenta
The open-source LLMOps platform: prompt playground, prompt management, LLM evaluation, and LLM...
Helicone/helicone
🧊 Open source LLM observability platform. One line of code to monitor, evaluate, and experiment. YC W23 🍓