prajeesh-chavan/OpenLLM-Monitor
OpenLLM Monitor is a plug-and-play, real-time observability dashboard for monitoring and debugging LLM API calls across OpenAI, Ollama, OpenRouter, and more. Tracks tokens, latency, cost, retries, and lets you replay prompts — fully open-source and self-hostable.
This tool helps developers and AI engineers manage and optimize their use of large language models (LLMs) across providers such as OpenAI, Ollama, and OpenRouter. It ingests your LLM API calls and surfaces a real-time dashboard of performance, cost, and usage patterns, so you can debug issues, compare models, and track spending.
No commits in the last 6 months.
Use this if you are a developer or AI engineer working with multiple LLMs and need a centralized way to monitor performance and costs, and to debug prompts, across different services.
Not ideal if you don't integrate LLMs directly into applications, or if you use a single LLM provider only for occasional, non-critical tasks.
Stars
20
Forks
4
Language
JavaScript
License
MIT
Category
Last pushed
Jun 26, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/prajeesh-chavan/OpenLLM-Monitor"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
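If you'd rather query the endpoint from code than shell out to curl, a minimal Python sketch follows. It only assumes the URL pattern visible in the curl command above; the mechanism for passing an API key is not documented here, so this covers unauthenticated access only.

```python
import json
import urllib.request
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repo quality endpoint URL, percent-encoding path segments."""
    return f"{BASE}/{quote(owner)}/{quote(repo)}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload for a repo (requires network access)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Example: data = fetch_quality("prajeesh-chavan", "OpenLLM-Monitor")
    print(quality_url("prajeesh-chavan", "OpenLLM-Monitor"))
```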
Higher-rated alternatives
Arize-ai/openinference
OpenTelemetry Instrumentation for AI Observability
vndee/llm-sandbox
Lightweight and portable LLM sandbox runtime (code interpreter) Python library.
apache/hertzbeat
An AI-powered next-generation open source real-time observability system.
traceloop/openllmetry
Open-source observability for your GenAI or LLM application, based on OpenTelemetry
utkuozdemir/nvidia_gpu_exporter
Nvidia GPU exporter for prometheus using nvidia-smi binary