jmamda/OpenTrace
A local reverse proxy that records every LLM request/response to SQLite. No cloud, no data leaving your machine.
OpenTrace sits between your application and an LLM provider as a reverse proxy, recording every request and response it forwards. Each call is captured and written to a local SQLite database, producing a detailed log of all LLM traffic so you can review prompts, responses, costs, and latencies.
Use this if you need to log and inspect all LLM calls your application makes, want to track costs and latencies locally without sending data to a third-party service, and prefer a simple setup over complex infrastructure.
Not ideal if you need a shared, enterprise-grade observability platform with team collaboration features and advanced analytics beyond local inspection.
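Since every call ends up in a local SQLite file, reviewing traffic is just a matter of querying it. The sketch below uses a hypothetical schema (the table and column names are assumptions for illustration, not OpenTrace's documented layout) to show what inspecting such a log might look like:

```python
import sqlite3

# Hypothetical schema -- OpenTrace's actual table layout may differ.
conn = sqlite3.connect(":memory:")  # use the real trace DB path in practice
conn.execute(
    """CREATE TABLE traces (
        id INTEGER PRIMARY KEY,
        model TEXT,
        prompt TEXT,
        response TEXT,
        latency_ms REAL,
        cost_usd REAL
    )"""
)
conn.execute(
    "INSERT INTO traces (model, prompt, response, latency_ms, cost_usd) "
    "VALUES (?, ?, ?, ?, ?)",
    ("gpt-4o", "Hello", "Hi there!", 412.5, 0.0003),
)

# Example review query: average latency and total cost per model.
rows = conn.execute(
    "SELECT model, AVG(latency_ms), SUM(cost_usd) FROM traces GROUP BY model"
).fetchall()
print(rows)  # [('gpt-4o', 412.5, 0.0003)]
```

The same aggregate queries (cost per model, p95 latency, and so on) work against the on-disk database with any SQLite client, which is the appeal of keeping traces local.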
Stars: 7
Forks: 2
Language: Rust
License: MIT
Last pushed: Mar 01, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/jmamda/OpenTrace"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
Arize-ai/openinference
OpenTelemetry Instrumentation for AI Observability
vndee/llm-sandbox
Lightweight and portable LLM sandbox runtime (code interpreter) Python library.
apache/hertzbeat
An AI-powered next-generation open source real-time observability system.
traceloop/openllmetry
Open-source observability for your GenAI or LLM application, based on OpenTelemetry
utkuozdemir/nvidia_gpu_exporter
Nvidia GPU exporter for prometheus using nvidia-smi binary