use-lumina/Lumina
A lightweight observability platform for LLM applications. Track costs, latency, and quality across your AI systems with minimal overhead.
Lumina helps developers monitor their Large Language Model (LLM) applications in production. It ingests real-time activity from your running LLM applications and surfaces insights into cost, latency, and response quality, so you can optimize your AI systems and quickly identify issues.
Use this if you are building or managing AI applications that use LLMs and need to understand their performance, spending, and reliability in a production environment.
Not ideal if you are monitoring traditional software applications that don't heavily rely on LLMs or if you only need basic uptime monitoring without detailed AI-specific metrics.
Stars: 9
Forks: —
Language: TypeScript
License: Apache-2.0
Category: —
Last pushed: Feb 27, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/use-lumina/Lumina"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
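The curl one-liner above can also be called programmatically. A minimal TypeScript sketch is below; the URL pattern comes from the snippet above, but the response body's shape is not documented here, so it is typed as `unknown` rather than guessed at.

```typescript
// Base path taken from the curl example above; the owner/repo segments are the
// only parts that vary per repository.
const API_BASE = "https://pt-edge.onrender.com/api/v1/quality/rag";

// Build the endpoint URL for a given owner/repo pair.
function qualityUrl(owner: string, repo: string): string {
  return `${API_BASE}/${encodeURIComponent(owner)}/${encodeURIComponent(repo)}`;
}

// Fetch the quality data. Without a key you get 100 requests/day, so a 429
// (rate limit) is a realistic failure mode to handle.
async function fetchQuality(owner: string, repo: string): Promise<unknown> {
  const res = await fetch(qualityUrl(owner, repo));
  if (!res.ok) {
    throw new Error(`API request failed with status ${res.status}`);
  }
  return res.json(); // shape unspecified by this page, hence `unknown`
}

// Usage:
// fetchQuality("use-lumina", "Lumina").then(console.log);
```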
Higher-rated alternatives
Deodat-Lawson/LaunchStack
AI-powered StartUp Accelerator Engine built with Next.js, LangChain, PostgreSQL + pgvector....
Deodat-Lawson/PDR_AI_v2
AI-powered StartUp Accelerator Engine built with Next.js, LangChain, PostgreSQL + pgvector....
ageerle/ruoyi-web
A modern AI chat application frontend built with Vue 3, supporting ChatGPT, Midjourney, and other AI features.
jargonsdev/jargons.dev
A community-driven dictionary that simplifies software, engineering and tech terms for all levels.
QuivrHQ/quivr
Opinionated RAG for integrating GenAI in your apps 🧠 Focus on your product rather than the RAG....