Helicone/helicone

🧊 Open source LLM observability platform. One line of code to monitor, evaluate, and experiment. YC W23 🍓

Score: 68 / 100 (Established)

This platform helps AI engineers manage and monitor their Large Language Model (LLM) applications. It acts as a single gateway for over 100 AI models, logging all requests and responses automatically. AI engineers use it to track costs, latency, and quality, debug issues, and test prompts, getting better visibility into their LLM operations.
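Proxy-style gateways like this are typically wired in by overriding the client's base URL and attaching an auth header, which is what "one line of code" usually means in practice. A minimal sketch of that pattern follows; the hostname and header name below are hypothetical placeholders, not values confirmed by this page (check the project's docs for the real ones):

```typescript
// Sketch: routing an OpenAI-compatible client through an observability
// gateway by swapping the base URL. The host and header name are
// assumptions for illustration only.
interface GatewayConfig {
  baseURL: string;
  headers: Record<string, string>;
}

function buildGatewayConfig(apiKey: string): GatewayConfig {
  return {
    baseURL: "https://gateway.example.com/v1", // hypothetical gateway host
    headers: { "Gateway-Auth": `Bearer ${apiKey}` }, // hypothetical header
  };
}

const cfg = buildGatewayConfig("sk-placeholder");
// An OpenAI-style client would then be constructed with this config, e.g.:
//   new OpenAI({ baseURL: cfg.baseURL, defaultHeaders: cfg.headers })
console.log(cfg.baseURL);
```

Because every request then flows through the gateway, logging, cost tracking, and latency measurement happen without further code changes.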

5,237 stars. Actively maintained with 5 commits in the last 30 days. Available on npm.

Use this if you are an AI engineer building applications with various LLMs and need a centralized way to monitor performance, debug interactions, and efficiently manage prompts and model routing.

Not ideal if you work with only a single, simple LLM integration and don't need advanced monitoring, routing, or prompt-management features.

Tags: LLM-operations, AI-application-monitoring, prompt-engineering, model-management, AI-gateway
Maintenance 13 / 25
Adoption 10 / 25
Maturity 25 / 25
Community 20 / 25


Stars: 5,237
Forks: 494
Language: TypeScript
License: Apache-2.0
Last pushed: Mar 07, 2026
Commits (30d): 5
Dependencies: 1

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/Helicone/helicone"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
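The same endpoint can be consumed programmatically. A minimal TypeScript sketch follows; only the URL comes from this page, and the JSON response shape is not documented here, so inspect it before relying on specific fields:

```typescript
// Fetch quality data for a repo from the public API shown above.
// Only the URL pattern is taken from this page; the response schema
// is unknown, so the result is returned untyped.
const BASE = "https://pt-edge.onrender.com/api/v1/quality";

function qualityURL(category: string, owner: string, repo: string): string {
  return `${BASE}/${category}/${owner}/${repo}`;
}

async function fetchQuality(
  category: string,
  owner: string,
  repo: string
): Promise<unknown> {
  const res = await fetch(qualityURL(category, owner, repo));
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json(); // shape not documented on this page; inspect before use
}

// Example (network call, rate-limited to 100 requests/day without a key):
// fetchQuality("prompt-engineering", "Helicone", "helicone").then(console.log);
console.log(qualityURL("prompt-engineering", "Helicone", "helicone"));
```

With a free API key the limit rises to 1,000 requests/day, per the note above.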