pydantic-ai-shields and pydantic-ai-middleware
These are complementary tools designed to be used together: middleware provides the execution framework and lifecycle hooks for intercepting agent calls, while shields supplies the specific guardrail implementations (injection detection, PII filtering, cost tracking) that run within that middleware layer.
About pydantic-ai-shields
vstorm-co/pydantic-ai-shields
Guardrail capabilities for Pydantic AI — cost tracking, prompt injection detection, PII filtering, secret redaction, tool permissions, and async guardrails. Built on pydantic-ai's native capabilities API.
This project helps AI application developers ensure the safe, compliant, and cost-controlled operation of their Pydantic AI agents. It allows you to define rules that filter agent inputs and outputs, manage access to tools, and track expenses. The primary users are AI engineers and developers building applications powered by Pydantic AI agents.
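The shields API itself is not shown here, but the core idea behind an input guardrail such as PII filtering can be sketched in plain Python. All names below are hypothetical; this is a conceptual illustration, not pydantic-ai-shields' actual interface:

```python
import re

# Illustrative patterns only; a real PII filter would cover many more
# categories (phone numbers, credit cards, addresses, etc.).
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact_pii(prompt: str) -> str:
    """Replace detected PII with placeholder tokens before the agent sees it."""
    prompt = EMAIL.sub("[EMAIL]", prompt)
    prompt = SSN.sub("[SSN]", prompt)
    return prompt

print(redact_pii("Contact jane.doe@example.com, SSN 123-45-6789"))
```

A guardrail library wraps checks like this around every agent call, so redaction (or rejection) happens uniformly rather than being re-implemented per prompt.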
About pydantic-ai-middleware
vstorm-co/pydantic-ai-middleware
Middleware layer for Pydantic AI — intercept, transform & guard agent calls with 7 lifecycle hooks, parallel execution, async guardrails, conditional routing, and tool-level permissions.
This project helps AI developers and engineers ensure their Pydantic AI agents operate safely and cost-effectively while adhering to specific rules. It intercepts your agent's prompts and outputs, applying checks that prevent issues such as prompt injection, PII leakage, and unauthorized tool use. The result is a more robust and compliant AI agent.
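The intercept-and-guard pattern can be sketched as a chain of before/after lifecycle hooks wrapped around an agent call. The hook names and wiring below are illustrative assumptions, not pydantic-ai-middleware's actual API:

```python
from typing import Callable

class GuardrailError(Exception):
    """Raised when a hook decides to block the call."""

class MiddlewareChain:
    """Wraps an agent call with before/after lifecycle hooks (conceptual sketch)."""

    def __init__(self) -> None:
        self._before: list[Callable[[str], str]] = []
        self._after: list[Callable[[str], str]] = []

    def before(self, hook: Callable[[str], str]) -> None:
        self._before.append(hook)

    def after(self, hook: Callable[[str], str]) -> None:
        self._after.append(hook)

    def run(self, agent: Callable[[str], str], prompt: str) -> str:
        for hook in self._before:   # each hook may transform or reject the prompt
            prompt = hook(prompt)
        output = agent(prompt)
        for hook in self._after:    # each hook may transform or reject the output
            output = hook(output)
        return output

def block_injection(prompt: str) -> str:
    """Naive injection check; real detectors are far more sophisticated."""
    if "ignore previous instructions" in prompt.lower():
        raise GuardrailError("possible prompt injection")
    return prompt

chain = MiddlewareChain()
chain.before(block_injection)

def echo_agent(prompt: str) -> str:
    return f"answer to: {prompt}"

print(chain.run(echo_agent, "What is 2 + 2?"))
```

The real library generalizes this shape with seven hook points, parallel execution, async guardrails, and conditional routing, but the control flow is the same: hooks run around the model call and can transform or veto what passes through.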