pydantic-ai-shields and pydantic-ai-middleware

These are complementary tools designed to be used together—middleware provides the execution framework and lifecycle hooks for intercepting agent calls, while shields provides the specific guardrail implementations (injection detection, PII filtering, cost tracking) that run within that middleware layer.

pydantic-ai-shields
Overall: 54 (Established)

Metric           pydantic-ai-shields    pydantic-ai-middleware
Maintenance      13/25                  10/25
Adoption         7/25                   7/25
Maturity         22/25                  20/25
Community        12/25                  11/25
Stars            27                     22
Forks            4                      3
Downloads        -                      -
Commits (30d)    0                      0
Language         Python                 Python
License          MIT                    MIT
Risk flags       none                   none

About pydantic-ai-shields

vstorm-co/pydantic-ai-shields

Guardrail capabilities for Pydantic AI — cost tracking, prompt injection detection, PII filtering, secret redaction, tool permissions, and async guardrails. Built on pydantic-ai's native capabilities API.

This project helps AI application developers ensure the safe, compliant, and cost-controlled operation of their Pydantic AI agents. It allows you to define rules that filter agent inputs and outputs, manage access to tools, and track expenses. The primary users are AI engineers and developers building applications powered by Pydantic AI agents.

Tags: AI development, LLM application security, AI ethics, cost management, agent safety
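The guardrail idea described above can be sketched in plain Python. This is a hypothetical minimal example of the pattern, not pydantic-ai-shields' actual API: it redacts email-like PII and rejects obvious prompt-injection phrases before text reaches the agent.

```python
import re

# Hypothetical guardrail sketch (NOT pydantic-ai-shields' real API):
# redact email-like PII and block obvious injection phrases before the
# text is ever passed to a Pydantic AI agent.

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
INJECTION_PHRASES = ("ignore previous instructions", "disregard your system prompt")

def guard_input(text: str) -> str:
    lowered = text.lower()
    if any(phrase in lowered for phrase in INJECTION_PHRASES):
        # Refuse to forward likely injection attempts to the agent.
        raise ValueError("blocked: possible prompt injection")
    # Replace detected email addresses with a placeholder.
    return EMAIL_RE.sub("[REDACTED_EMAIL]", text)

print(guard_input("Contact me at alice@example.com"))  # Contact me at [REDACTED_EMAIL]
```

A real guardrail layer would also cover secret redaction, tool permissions, and cost tracking; this sketch only illustrates the input-filtering shape of the idea.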

About pydantic-ai-middleware

vstorm-co/pydantic-ai-middleware

Middleware layer for Pydantic AI — intercept, transform & guard agent calls with 7 lifecycle hooks, parallel execution, async guardrails, conditional routing, and tool-level permissions.

This project helps AI developers and engineers ensure their Pydantic AI agents operate safely, cost-effectively, and within defined rules. It intercepts an agent's prompts and outputs, applying checks that prevent issues such as prompt injection, PII leakage, and unauthorized tool use. The result is a more robust and compliant AI agent.

Tags: AI-safety, prompt-engineering, AI-governance, cost-management, LLM-security
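The intercept-and-transform pattern described above can be sketched as a chain of wrapping handlers. This is a hypothetical illustration of the middleware idea, not pydantic-ai-middleware's actual API: each middleware wraps the next handler and may transform the prompt on the way in or veto the call entirely.

```python
from typing import Callable, List

# Hypothetical middleware sketch (NOT pydantic-ai-middleware's real API):
# each middleware wraps the next handler, so it can inspect or transform
# the prompt before the call and the output after it.

Handler = Callable[[str], str]
Middleware = Callable[[Handler], Handler]

def build_pipeline(agent: Handler, middlewares: List[Middleware]) -> Handler:
    # Wrap in reverse order so the first middleware in the list runs outermost.
    handler = agent
    for mw in reversed(middlewares):
        handler = mw(handler)
    return handler

def strip_input_mw(next_handler: Handler) -> Handler:
    def wrapped(prompt: str) -> str:
        return next_handler(prompt.strip())  # normalize input before the agent
    return wrapped

def length_guard_mw(next_handler: Handler) -> Handler:
    def wrapped(prompt: str) -> str:
        if len(prompt) > 1000:
            # Veto oversized prompts instead of passing them through.
            raise ValueError("blocked: prompt exceeds length budget")
        return next_handler(prompt)
    return wrapped

def fake_agent(prompt: str) -> str:
    # Stand-in for a real Pydantic AI agent call.
    return f"echo: {prompt}"

pipeline = build_pipeline(fake_agent, [strip_input_mw, length_guard_mw])
print(pipeline("  hello  "))  # echo: hello
```

The real library advertises seven lifecycle hooks, parallel execution, and conditional routing; this sketch shows only the core composition idea that makes such hooks possible.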

Scores updated daily from GitHub, PyPI, and npm data.