invariantlabs-ai/invariant-gateway
LLM proxy to observe and debug what your AI agents are doing.
This project acts as an intermediary between your AI agents and large language model providers such as OpenAI or Anthropic. It automatically captures and stores every interaction your agents have, producing a detailed log of their activity so you can see exactly what your agents are doing and diagnose issues faster. It's designed for developers, AI engineers, and teams building and managing AI agent systems.
Use this if you are developing or deploying AI agents and need to monitor, debug, and understand their interactions with LLMs, including tool use and streaming responses.
Not ideal if you are a casual user of an AI agent and don't need to inspect its underlying LLM calls, or if you are not developing AI agents yourself.
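The proxy idea described above can be sketched with stdlib-only code. Note the gateway base URL, port, path, and dataset name below are hypothetical placeholders, not the project's actual deployment details; consult the repository's README for the real configuration.

```python
import json
import urllib.request

# Hypothetical gateway address: the real base URL, path layout, and auth
# headers depend on how you deploy invariant-gateway (see its README).
GATEWAY_BASE = "http://localhost:8005/api/v1/gateway/my-agent/openai"

def proxied_chat_request(model: str, messages: list[dict]) -> urllib.request.Request:
    """Build an OpenAI-style chat-completions request aimed at the proxy.

    The agent's code is unchanged except for the base URL: the gateway
    forwards the call to the real provider and records both sides.
    """
    body = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        f"{GATEWAY_BASE}/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer sk-...",  # placeholder; your provider key still authenticates upstream
        },
        method="POST",
    )

# Sending it is one line once a gateway is actually running:
# with urllib.request.urlopen(proxied_chat_request("gpt-4o", [...])) as resp:
#     print(json.load(resp))
```

The point of the design is that interception is transparent: swapping the provider's base URL for the gateway's is the only change the agent code needs.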
Stars: 68
Forks: 9
Language: Python
License: Apache-2.0
Last pushed: Nov 06, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/invariantlabs-ai/invariant-gateway"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
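For scripting, the same endpoint shown in the curl command can be called from Python. The response schema is not documented in this listing, so this sketch just decodes whatever JSON the API returns.

```python
import json
import urllib.request

# Endpoint from the curl example above.
API = "https://pt-edge.onrender.com/api/v1/quality/agents"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{API}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload for one repository."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

# Usage (performs a network call; subject to the 100 requests/day limit):
# data = fetch_quality("invariantlabs-ai", "invariant-gateway")
# print(json.dumps(data, indent=2))
```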
Higher-rated alternatives
ucsandman/DashClaw
🛡️Decision infrastructure for AI agents. Intercept actions, enforce guard policies, require...
Dicklesworthstone/destructive_command_guard
The Destructive Command Guard (dcg) is for blocking dangerous git and shell commands from being...
microsoft/agent-governance-toolkit
AI Agent Governance Toolkit — Policy enforcement, zero-trust identity, execution sandboxing, and...
vstorm-co/pydantic-ai-shields
Guardrail capabilities for Pydantic AI — cost tracking, prompt injection detection, PII...
Pro-GenAI/Agent-Action-Guard
🛡️ Safe AI Agents through Action Classifier