pegasi-ai/clawreins

Intervention layer with audit logs for OpenClaw agents. Browser-aware. Trajectory-aware. Human-routable.

Score: 57 / 100 (Established)

This project acts as a safety net for computer-using AI agents, preventing destructive or unintended actions. It intercepts an agent's intended actions and blocks them, pauses them for human approval, or records them in an immutable audit log. Security-conscious teams running AI agents in operations or development workflows will find it essential for maintaining control and accountability.
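The intervention pattern described above can be sketched in a few lines. This is an illustrative toy, not ClawReins' actual API: the action names, policy rules, and audit-log shape are all assumptions for the sake of the example.

```python
# Illustrative sketch of the block / pause-for-approval / log pattern.
# Names and policy rules are hypothetical, not ClawReins' real API.
from enum import Enum


class Verdict(Enum):
    ALLOW = "allow"
    BLOCK = "block"
    NEEDS_APPROVAL = "needs_approval"


# Hypothetical policy: which actions are destructive or sensitive.
DESTRUCTIVE = {"rm", "drop_table", "delete_repo"}
SENSITIVE = {"send_email", "make_payment"}

audit_log: list[dict] = []  # stand-in for an immutable audit store


def intervene(action: str, args: dict) -> Verdict:
    """Classify an intended agent action and record it in the audit log."""
    if action in DESTRUCTIVE:
        verdict = Verdict.BLOCK
    elif action in SENSITIVE:
        verdict = Verdict.NEEDS_APPROVAL  # route to a human reviewer
    else:
        verdict = Verdict.ALLOW
    # Every intended action is logged, whatever the verdict.
    audit_log.append({"action": action, "args": args, "verdict": verdict.value})
    return verdict
```

In practice the audit store would be append-only and tamper-evident, and the approval branch would block until a human responds; the sketch only shows the routing decision.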


Use this if you need to ensure AI agents operate safely and only with human permission for critical tasks, or if you require detailed audit trails of all agent activities.

Not ideal if your AI agents perform only trivial, non-destructive tasks and do not require human oversight or extensive audit logging.

Tags: AI-safety, Agent-operations, Security-auditing, Workflow-automation, Compliance
No package · No dependents
Maintenance: 13 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 18 / 25


Stars: 379
Forks: 46
Language: Python
License: Apache-2.0
Last pushed: Mar 27, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/perception/pegasi-ai/clawreins"

Open to everyone: 100 requests/day with no key. Get a free key for 1,000 requests/day.