Jitera-Labs/openguard
Safety proxy for your AI Agents
openguard helps AI developers ensure their agents receive clear, direct instructions by automatically rephrasing polite or ambiguous user prompts. It takes the user's original, softer prompt and outputs a more authoritative "strong" version that large language models are less likely to ignore. It is aimed at AI developers and engineers who want to improve the performance and responsiveness of their agents.
Available on PyPI.
Use this if your AI agents are too "polite" or "lazy" and you need to enforce clearer, more direct interactions for better performance.
Not ideal if you need a tool for general content moderation, filtering harmful inputs, or managing AI agent safety protocols beyond prompt rephrasing.
Stars: 10
Forks: —
Language: Python
License: —
Category: —
Last pushed: Mar 16, 2026
Commits (30d): 0
Dependencies: 10
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/Jitera-Labs/openguard"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
superagent-ai/superagent
Superagent protects your AI applications against prompt injections, data leaks, and harmful...
hexitlabs/vigil
🛡️ Open-source safety guardrail for AI agent tool calls. <2ms, zero dependencies.
ankitlade12/AgentArmor
The full-stack safety layer for AI agents. Budget limits, prompt injection shields, PII...
mguard-ai/mguard
Memory defense for AI agents — stops MINJA, AgentPoison, and MemoryGraft attacks. Zero dependencies.
WardLink/TrustLayer--Security-Control-Plane-For-LLM-AI
TrustLayer is an API-first security control plane for LLM apps and AI agents. It protects...