prajeesh-chavan/OpenLLM-Monitor

OpenLLM Monitor is a plug-and-play, real-time observability dashboard for monitoring and debugging LLM API calls across OpenAI, Ollama, OpenRouter, and more. It tracks tokens, latency, cost, and retries, and lets you replay prompts. Fully open-source and self-hostable.

Score: 38 / 100 (Emerging)

This tool helps developers and AI engineers manage and optimize their use of large language models (LLMs) from various providers like OpenAI, Ollama, and OpenRouter. It takes your LLM API calls as input and provides a real-time dashboard showing performance, cost, and usage patterns. The output helps you debug issues, compare models, and keep track of spending.

No commits in the last 6 months.

Use this if you are a developer or AI engineer working with multiple LLMs and need a centralized way to monitor performance and costs, and to debug prompts across different services.

Not ideal if you do not directly manage or integrate LLMs into applications, or if you only use a single LLM provider for occasional, non-critical tasks.

Tags: LLM-operations · AI-application-development · API-monitoring · AI-cost-management · prompt-engineering
Flags: Stale (6m) · No Package · No Dependents

Maintenance: 2 / 25
Adoption: 6 / 25
Maturity: 15 / 25
Community: 15 / 25

How are scores calculated?

Stars: 20
Forks: 4
Language: JavaScript
License: MIT
Last pushed: Jun 26, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/prajeesh-chavan/OpenLLM-Monitor"

Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
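The same endpoint can also be called from code. A minimal Python sketch, assuming the endpoint returns a JSON body (the response schema is not documented here, so the payload is returned as-is):

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner: str, repo: str) -> str:
    # Build the per-repository quality endpoint URL shown in the curl example.
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    # Fetch and decode the JSON payload (assumes a JSON response body).
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.loads(resp.read().decode("utf-8"))

if __name__ == "__main__":
    print(quality_url("prajeesh-chavan", "OpenLLM-Monitor"))
```

With an API key, you would attach it to the request per the service's documentation; the header or query-parameter name is not specified above, so it is omitted here.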