epappas/llmtrace

Zero-code LLM security & observability proxy. Real-time prompt injection detection, PII scanning, and cost control for OpenAI-compatible APIs. Built in Rust.

Quality score: 34 / 100 (Emerging)

This tool helps developers and product managers using large language models (LLMs) to keep their applications secure and cost-effective. It sits as an invisible layer between your application and LLM providers like OpenAI, analyzing requests and responses in transit. As your application's LLM traffic flows through it, it emits real-time alerts on prompt injection, PII leaks, and cost overruns, along with performance insights.

Use this if you are building an application with OpenAI-compatible LLMs and need to monitor for security vulnerabilities, control costs, and track performance without changing your existing code.

Not ideal if your application does not interact with OpenAI-compatible LLMs or if you only need basic logging without advanced security and cost control features.
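The zero-code claim works because the proxy speaks the same OpenAI-compatible API as the upstream provider, so the application only needs its base URL redirected. A minimal sketch of that idea; the proxy address and port are assumptions about a typical deployment, not documented llmtrace configuration (the `OPENAI_BASE_URL` variable is the standard override read by OpenAI-compatible SDKs):

```python
import os

# Hypothetical deployment: llmtrace listening locally on port 8080.
# This address is an assumption, not a documented default.
LLMTRACE_URL = "http://localhost:8080/v1"

# Zero-code integration: OpenAI-compatible SDKs honor OPENAI_BASE_URL,
# so pointing it at the proxy reroutes all LLM traffic without
# touching application code.
os.environ["OPENAI_BASE_URL"] = LLMTRACE_URL

# Every request the application makes now passes through llmtrace,
# which can scan it for prompt injection and PII and meter its cost
# before forwarding to the real provider.
```

The same redirection can usually be done in the deployment environment (e.g. a container env var) rather than in code, which is what makes the integration "zero-code".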

Tags: LLM application development, AI security, API cost management, production monitoring, data privacy
No package published. No dependents.
Maintenance: 10 / 25
Adoption: 10 / 25
Maturity: 11 / 25
Community: 3 / 25


Stars: 35
Forks: 1
Language: Rust
License: MIT
Last pushed: Mar 11, 2026
Monthly downloads: 18
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/agents/epappas/llmtrace"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
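The same endpoint can be queried from code using only the standard library. A short sketch; the helper names below are ours, not part of any SDK, and only the URL shown in the curl example above is taken from the source:

```python
import json
import urllib.request

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-score endpoint for an owner/repo pair."""
    return f"https://pt-edge.onrender.com/api/v1/quality/agents/{owner}/{repo}"

def fetch_quality(owner: str, repo: str, timeout: float = 10.0) -> dict:
    """Fetch and decode the quality-score JSON.

    Works unauthenticated up to the 100 requests/day limit; pass a
    key via headers if you need the higher quota.
    """
    with urllib.request.urlopen(quality_url(owner, repo), timeout=timeout) as resp:
        return json.load(resp)
```

For example, `fetch_quality("epappas", "llmtrace")` retrieves the same data as the curl command above.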