Helicone/helicone
🧊 Open source LLM observability platform. One line of code to monitor, evaluate, and experiment. YC W23 🍓
Helicone helps AI engineers monitor and manage their large language model (LLM) applications. It acts as a single gateway for over 100 AI models, automatically logging every request and response, so teams can track cost, latency, and quality, debug issues, and test prompts with full visibility into their LLM operations.
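The "one line of code" claim refers to Helicone's proxy-style integration: you point an existing OpenAI client at Helicone's gateway and add one auth header, and every request is logged. A minimal sketch in Python, assuming the documented `https://oai.helicone.ai/v1` base URL and `Helicone-Auth` header (verify both against Helicone's current docs before use):

```python
import json
import urllib.request

# Helicone works as a drop-in proxy: the only change to an existing
# OpenAI integration is the base URL plus one extra header.
OPENAI_BASE = "https://api.openai.com/v1"      # direct
HELICONE_BASE = "https://oai.helicone.ai/v1"   # proxied, logged by Helicone

def build_request(prompt: str, api_key: str, helicone_key: str) -> urllib.request.Request:
    """Build a chat-completion request routed through the Helicone gateway."""
    body = json.dumps({
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{HELICONE_BASE}/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
            # The one extra line: authenticates the request to Helicone
            # so it appears in your dashboard.
            "Helicone-Auth": f"Bearer {helicone_key}",
        },
    )
```

With an SDK such as the `openai` package, the same change is just setting `base_url` to the Helicone gateway and passing the `Helicone-Auth` header in `default_headers`.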
5,237 stars. Actively maintained with 5 commits in the last 30 days. Available on npm.
Use this if you are an AI engineer building applications on multiple LLMs and need a centralized way to monitor performance, debug interactions, and manage prompts and model routing.
Not ideal if you have a single, simple LLM integration and don't need advanced monitoring, routing, or prompt management.
Stars: 5,237
Forks: 494
Language: TypeScript
License: Apache-2.0
Category:
Last pushed: Mar 07, 2026
Commits (30d): 5
Dependencies: 1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/Helicone/helicone"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
Related tools
langfuse/langfuse
🪢 Open source LLM engineering platform: LLM Observability, metrics, evals, prompt management,...
Arize-ai/phoenix
AI Observability & Evaluation
Mirascope/mirascope
The LLM Anti-Framework
Agenta-AI/agenta
The open-source LLMOps platform: prompt playground, prompt management, LLM evaluation, and LLM...
algorithmicsuperintelligence/optillm
Optimizing inference proxy for LLMs