mattijsmoens/intentshield

Pre-execution intent verification for AI agents. Audits what your AI is about to do, not what it says. Zero dependencies, deterministic, hash-sealed.

Quality score: 49 / 100 (Emerging)

This tool safeguards AI agents by auditing proposed actions (such as running shell commands or writing files) before they execute. It takes an AI's intended action and its payload as input, determines whether the action is safe or dangerous, and blocks harmful activity. Operations engineers, security professionals, and product managers deploying AI agents in sensitive environments will find it useful for preventing malicious actions and data leaks.
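The audit pattern described above can be sketched in a few lines. This is a minimal illustration of pre-execution intent verification, not IntentShield's actual API: the `audit` function, `Verdict` class, and the deny-rules are hypothetical names invented here to show the shape of the idea.

```python
# Hypothetical sketch of pre-execution action auditing (NOT IntentShield's
# real API): inspect a proposed action and its payload, return a verdict
# before anything runs.
import re
from dataclasses import dataclass

@dataclass
class Verdict:
    allowed: bool
    reason: str

# Illustrative deny-rules: payload patterns that should never reach a shell.
DANGEROUS_SHELL = [
    r"\brm\s+-rf\b",          # recursive delete
    r"\bcurl\b.*\|\s*sh\b",   # pipe-to-shell install
    r"\bmkfs\b",              # reformat a filesystem
]

def audit(action: str, payload: str) -> Verdict:
    """Return a verdict for a proposed agent action before it executes."""
    if action == "shell":
        for pattern in DANGEROUS_SHELL:
            if re.search(pattern, payload):
                return Verdict(False, f"blocked: matched {pattern!r}")
    if action == "write_file" and payload.startswith("/etc/"):
        return Verdict(False, "blocked: write to system config path")
    return Verdict(True, "ok")

print(audit("shell", "rm -rf /"))          # blocked
print(audit("write_file", "/etc/passwd"))  # blocked
print(audit("shell", "ls -la"))            # allowed
```

Because the rules are plain pattern matches over the proposed action rather than the model's prose, the verdict is deterministic: the same action and payload always produce the same result, which is the property the tool advertises.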

Available on PyPI.

Use this if you need a robust, deterministic safety layer that prevents your AI agent from performing dangerous actions like deleting files, executing shell commands, or leaking sensitive information, even when content filters fail.

Not ideal if you need a built-in language model output parser or advanced hallucination detection, as these features have been removed to focus solely on action auditing.

Tags: AI-safety, agent-security, data-protection, prompt-injection-prevention, operational-security
Dependents: none
Maintenance: 10 / 25
Adoption: 6 / 25
Maturity: 20 / 25
Community: 13 / 25


Stars: 17
Forks: 3
Language: Python
License: (none listed)
Last pushed: Mar 11, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/agents/mattijsmoens/intentshield"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
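The same endpoint shown in the curl example can be called from Python. The URL comes from the listing above; the response's JSON field names are not documented here, so this sketch just returns the parsed body as-is.

```python
# Sketch: fetch the quality data from Python instead of curl.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/agents"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a given owner/repo."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    # Free tier: no API key needed, 100 requests/day.
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(fetch_quality("mattijsmoens", "intentshield"))
```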