jzOcb/agent-guardrails
Mechanical enforcement tools to prevent AI agents from bypassing established project standards.
This project helps engineering and MLOps teams prevent AI coding agents from bypassing established development standards and introducing critical errors. It takes your project's existing rules and enforces them against the code through pre-commit hooks and validation scripts, ensuring the agent uses correct imports, avoids duplicating logic, and doesn't expose sensitive information.
Use this if you are using AI coding agents (like Claude Code, Clawdbot, or Cursor) and need to mechanically enforce coding standards, prevent security vulnerabilities, and ensure agent-generated code integrates correctly into your existing systems.
Not ideal if your primary concern is managing natural language prompts or if you don't use AI agents for code generation and modification.
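To illustrate the mechanism, below is a minimal sketch of what such a pre-commit hook could look like in shell. The specific checks (a crude secret scan and an import allow-list read from imports_allowlist.txt) and all file names are assumptions chosen for illustration, not the repo's actual scripts.

#!/usr/bin/env sh
# Hypothetical pre-commit hook sketch (not this repo's actual script).
# Blocks commits that stage obvious secrets or Python imports outside an allow-list.
set -eu

staged=$(git diff --cached --name-only --diff-filter=ACM)
[ -z "$staged" ] && exit 0

# 1. Reject staged changes that look like hard-coded credentials (patterns are illustrative).
if git diff --cached -U0 | grep -qE 'AKIA[0-9A-Z]{16}|-----BEGIN (RSA|EC) PRIVATE KEY-----'; then
    echo "pre-commit: possible secret in staged changes; commit blocked" >&2
    exit 1
fi

# 2. In staged Python files, require every top-level module to appear in
#    imports_allowlist.txt (one module name per line; file name is hypothetical).
for f in $staged; do
    case "$f" in *.py) ;; *) continue ;; esac
    bad=$(grep -E '^(import|from) ' "$f" | awk '{print $2}' | cut -d. -f1 | sort -u \
          | grep -vxF -f imports_allowlist.txt || true)
    if [ -n "$bad" ]; then
        echo "pre-commit: $f uses modules not in imports_allowlist.txt: $bad" >&2
        exit 1
    fi
done

Installed as .git/hooks/pre-commit (or wired through a hook framework), a check like this runs before every commit, whether the changes were staged by a human or by an agent.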
Stars: 10
Forks: —
Language: Shell
License: MIT
Category:
Last pushed: Feb 06, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/jzOcb/agent-guardrails"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
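As a small usage sketch, the same endpoint can be inspected with the HTTP status surfaced, or the body pretty-printed with jq; whether the service reports an exhausted daily limit with a 429 status is an assumption, not documented behavior.

# Show the HTTP status alongside the body (a 429 would suggest the daily limit was hit).
curl -s -w '\nHTTP %{http_code}\n' \
  "https://pt-edge.onrender.com/api/v1/quality/agents/jzOcb/agent-guardrails"

# Or pretty-print the JSON body with jq:
curl -s "https://pt-edge.onrender.com/api/v1/quality/agents/jzOcb/agent-guardrails" | jq .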
Higher-rated alternatives
ucsandman/DashClaw
🛡️Decision infrastructure for AI agents. Intercept actions, enforce guard policies, require...
Dicklesworthstone/destructive_command_guard
The Destructive Command Guard (dcg) is for blocking dangerous git and shell commands from being...
microsoft/agent-governance-toolkit
AI Agent Governance Toolkit — Policy enforcement, zero-trust identity, execution sandboxing, and...
vstorm-co/pydantic-ai-shields
Guardrail capabilities for Pydantic AI — cost tracking, prompt injection detection, PII...
Pro-GenAI/Agent-Action-Guard
🛡️ Safe AI Agents through Action Classifier