usagepanda/proxy
Security and compliance proxy for LLM APIs
This proxy sits between your application and Large Language Model (LLM) APIs such as OpenAI, letting you control and secure how your applications use these services. It intercepts your application's LLM requests, enforces your organization's policies, and only then returns the model's response. It is designed for developers, operations engineers, and security teams responsible for safely integrating LLMs into production applications.
No commits in the last 6 months.
Use this if you are deploying LLM-powered applications and need to enforce policies around cost, security, content moderation, or compliance.
Not ideal if you are only experimenting with LLM APIs and do not have production-level security, cost, or compliance requirements.
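The interception model described above can be sketched in JavaScript. This is a hypothetical illustration, not the project's documented API: the proxy address, the OpenAI-compatible path, and the model name are all assumptions. The point is that the application changes only its base URL; the proxy sits in front of the upstream LLM API and applies policy before relaying.

```javascript
// Sketch: routing an app's LLM calls through a policy proxy.
// The proxy URL and OpenAI-compatible path are assumptions for
// illustration, not documented endpoints of usagepanda/proxy.
function buildChatRequest(proxyBaseUrl, apiKey, messages) {
  return {
    url: `${proxyBaseUrl}/v1/chat/completions`, // assumed OpenAI-compatible path
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`, // proxy validates or relays the key
      },
      body: JSON.stringify({ model: "gpt-4o-mini", messages }), // model name illustrative
    },
  };
}

// The only change from calling the LLM API directly is the base URL:
// instead of api.openai.com, requests go to the proxy, which can enforce
// cost, security, moderation, or compliance policies before forwarding.
const req = buildChatRequest("https://llm-proxy.internal", "sk-example", [
  { role: "user", content: "Hello" },
]);
```

Because the proxy speaks the same protocol as the upstream API, adopting it requires no application logic changes, which is what makes this pattern attractive for production policy enforcement.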
Stars: 51
Forks: 9
Language: JavaScript
License: AGPL-3.0
Category:
Last pushed: Jul 21, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/usagepanda/proxy"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
BerriAI/litellm
Python SDK, Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with...
vava-nessa/free-coding-models
Find, benchmark and install in CLI 158 FREE coding LLM models across 20 providers in real time
envoyproxy/ai-gateway
Manages Unified Access to Generative AI Services built on Envoy Gateway
theopenco/llmgateway
Route, manage, and analyze your LLM requests across multiple providers with a unified API interface.
Portkey-AI/gateway
A blazing fast AI Gateway with integrated guardrails. Route to 200+ LLMs, 50+ AI Guardrails with...