xinxin7/claw-shield

The governance layer for AI agents — monitor reasoning, audit tool calls, and secure the loop through OHTTP privacy routing.

Quality score: 36 / 100 (Emerging)

This project helps operations engineers, security analysts, and compliance officers monitor their AI agents' actions in real time. It takes an agent's internal reasoning and proposed tool calls as input and produces a clear, auditable trace of every decision and action, along with an independent "judge" assessment of risky actions. This lets users understand, control, and secure their automated AI workflows.

Use this if you need to ensure your AI agents are operating safely, ethically, and within defined boundaries, especially when they can perform sensitive actions like deleting files or making API requests.

Not ideal if you are looking for a simple API wrapper or a tool primarily focused on agent development and debugging without a strong emphasis on security or governance.
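The description above implies a specific control pattern: every tool call an agent proposes is risk-scored by an independent judge and logged to an audit trace before it is allowed to run. Below is a minimal sketch of that pattern in Python. All names here (ToolCall, AuditGate, the tool names) are illustrative assumptions, not claw-shield's actual API.

```python
# Hypothetical sketch of an audit gate: an agent proposes a tool call,
# a "judge" assesses its risk, and the verdict is recorded in an
# auditable trace. Names are illustrative, not claw-shield's real API.
from dataclasses import dataclass, field

# Assumed set of sensitive actions (cf. "deleting files or making API requests")
RISKY_TOOLS = {"delete_file", "http_request"}

@dataclass
class ToolCall:
    tool: str
    args: dict
    reasoning: str  # the agent's stated justification for the call

@dataclass
class AuditGate:
    trace: list = field(default_factory=list)

    def judge(self, call: ToolCall) -> str:
        # Toy judge: flag calls to known-sensitive tools as "risky".
        return "risky" if call.tool in RISKY_TOOLS else "safe"

    def review(self, call: ToolCall) -> bool:
        verdict = self.judge(call)
        allowed = verdict == "safe"
        # Every decision is appended to the trace, allowed or not.
        self.trace.append({"tool": call.tool, "reasoning": call.reasoning,
                           "verdict": verdict, "allowed": allowed})
        return allowed

gate = AuditGate()
print(gate.review(ToolCall("read_file", {"path": "notes.txt"}, "summarize notes")))
print(gate.review(ToolCall("delete_file", {"path": "notes.txt"}, "cleanup")))
```

A real governance layer would replace the toy judge with an independent model or policy engine, but the shape (propose, judge, log, then allow or block) is the same.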

Tags: AI Governance · Agent Security · Compliance Auditing · Automated Workflow Monitoring · Operations Control
No Package · No Dependents
Maintenance: 10 / 25
Adoption: 6 / 25
Maturity: 11 / 25
Community: 9 / 25


Stars: 17
Forks: 2
Language: Rust
License: MIT
Last pushed: Mar 11, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/agents/xinxin7/claw-shield"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
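The endpoint above presumably returns JSON. As a sketch of consuming it, the snippet below parses a canned payload whose field names are guesses based on the scores shown on this page (the response schema is not documented here); a real client would GET the URL shown above instead of using a string literal.

```python
# Sketch of parsing a response from the quality API. The payload below is
# a guess at the schema, populated with the numbers shown on this page;
# it is not a documented response format.
import json

sample = json.loads("""
{
  "agent": "xinxin7/claw-shield",
  "score": 36,
  "tier": "Emerging",
  "breakdown": {"maintenance": 10, "adoption": 6, "maturity": 11, "community": 9}
}
""")

# The four 25-point categories should sum to the 100-point total score.
total = sum(sample["breakdown"].values())
print(f'{sample["agent"]}: {sample["score"]}/100 ({sample["tier"]})')
assert total == sample["score"]
```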