use-lumina/Lumina

A lightweight observability platform for LLM applications. Track costs, latency, and quality across your AI systems with minimal overhead.

Score: 28 / 100 (Experimental)

Lumina helps developers monitor Large Language Model (LLM) applications in production. It ingests real-time activity from your running LLM applications and surfaces insights into cost, latency, and response quality, so you can optimize your AI systems and quickly identify issues.

Use this if you are building or managing AI applications that use LLMs and need to understand their performance, spending, and reliability in a production environment.

Not ideal if you are monitoring traditional software applications that don't heavily rely on LLMs or if you only need basic uptime monitoring without detailed AI-specific metrics.

Tags: AI-application-monitoring, LLM-observability, AI-cost-management, AI-quality-assurance, production-AI-systems
No package published · No dependents
Maintenance 10 / 25
Adoption 5 / 25
Maturity 13 / 25
Community 0 / 25


Stars: 9
Forks:
Language: TypeScript
License: Apache-2.0
Last pushed: Feb 27, 2026
Commits (30d): 0

Get this data via API:

curl "https://pt-edge.onrender.com/api/v1/quality/rag/use-lumina/Lumina"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
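The curl command above can also be called programmatically. A minimal TypeScript sketch follows, assuming only the endpoint path shown on this page; the JSON response shape is not documented here, so the result is typed as `unknown`:

```typescript
// Base path taken from the curl example on this page.
const API_BASE = "https://pt-edge.onrender.com/api/v1/quality/rag";

// Build the per-repo quality URL, e.g. for use-lumina/Lumina.
export function qualityUrl(owner: string, repo: string): string {
  return `${API_BASE}/${encodeURIComponent(owner)}/${encodeURIComponent(repo)}`;
}

// Fetch and parse the quality record. The response schema is an
// assumption (undocumented here), so we return `unknown` and let
// callers validate it themselves.
export async function fetchQuality(owner: string, repo: string): Promise<unknown> {
  const res = await fetch(qualityUrl(owner, repo));
  // Anonymous access is rate-limited to 100 requests/day, so handle errors.
  if (!res.ok) {
    throw new Error(`Quality API returned ${res.status} for ${owner}/${repo}`);
  }
  return res.json();
}
```

Note that rate-limit exhaustion will surface here as a non-2xx status, so retrying with an API key is the usual fallback.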