xinxin7/claw-shield
The governance layer for AI agents — monitor reasoning, audit tool calls, and secure the loop through OHTTP privacy routing.
This project helps operations engineers, security analysts, and compliance officers monitor the real-time actions of their AI agents. It takes an agent's internal reasoning and proposed tool calls as input and produces a clear, auditable trace of every decision and action, along with an independent "judge" assessment of risky actions. This lets users understand, control, and secure their automated AI workflows.
Use this if you need to ensure your AI agents are operating safely, ethically, and within defined boundaries, especially when they can perform sensitive actions like deleting files or making API requests.
Not ideal if you are looking for a simple API wrapper or a tool primarily focused on agent development and debugging without a strong emphasis on security or governance.
Stars: 17
Forks: 2
Language: Rust
License: MIT
Category:
Last pushed: Mar 11, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/xinxin7/claw-shield"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
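The endpoint above returns JSON that can be consumed from a script. A minimal sketch of parsing such a record follows; note that the field names used here are assumptions for illustration, since the actual response schema is not documented on this page:

```python
import json

# Hypothetical response body. The real schema of the pt-edge quality API
# is not documented here, so these field names are assumptions based on
# the metadata shown on this page.
sample_response = """
{
  "repo": "xinxin7/claw-shield",
  "stars": 17,
  "forks": 2,
  "language": "Rust",
  "license": "MIT"
}
"""

record = json.loads(sample_response)
print(f'{record["repo"]}: {record["stars"]} stars, {record["forks"]} forks')
```

In a real script you would replace `sample_response` with the body fetched from the API (for example via `urllib.request` or the curl command above) and inspect the returned keys before relying on any of them.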
Higher-rated alternatives
23blocks-OS/ai-maestro
AI Agent Orchestrator with Skills System - Give AI Agents superpowers: memory search, code graph...
ImKKingshuk/LockKnife
LockKnife: The Ultimate Android Security Research Tool. A unified TUI workspace and headless CLI...
conorluddy/ios-simulator-skill
An iOS Simulator Skill for ClaudeCode. Use it to optimise Claude's ability to build, run and...
backbay-labs/clawdstrike
Runtime security enforcement and threat hunting engine for autonomous AI fleets. Build Swarm...
FlineDev/ContextKit
Claude Code context engineering & planning system for individual AI development workflows