matank001/copilot-agents-guard
LLM-as-a-Judge security layer for Microsoft Copilot Studio agents
This project helps Copilot Studio developers add a custom security layer to their AI agents. It processes each user interaction, evaluates it for potential threats using an external webhook, and decides whether to block or allow the agent's action. This is for developers building and maintaining AI agents in Microsoft Copilot Studio who want to enhance security and control.
No commits in the last 6 months.
Use this if you are a developer building custom agents in Microsoft Copilot Studio and need to implement an external security layer to filter user interactions.
Not ideal if you are an end-user of a Copilot Studio agent or do not have the technical expertise to deploy and configure a Python Flask server.
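The flow described above (receive a user interaction, judge it, return an allow/block decision) can be sketched as a minimal stdlib webhook. This is a hedged illustration, not the project's actual code: the payload shape (`text` field), the `/evaluate` semantics, and the keyword blocklist standing in for the LLM judge are all assumptions.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical phrases to flag; a real deployment would call an
# LLM-as-a-Judge here instead of keyword matching.
BLOCKLIST = ("ignore previous instructions", "reveal your system prompt")

def evaluate_interaction(text: str) -> dict:
    """Stand-in judge: return a block/allow verdict for one interaction."""
    lowered = text.lower()
    verdict = "block" if any(p in lowered for p in BLOCKLIST) else "allow"
    return {"verdict": verdict}

class GuardHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # The agent platform POSTs the user's message to this webhook
        # and acts on the returned verdict.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        result = evaluate_interaction(payload.get("text", ""))
        body = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), GuardHandler).serve_forever()
```

The actual project runs this pattern as a Flask server; the stdlib version above just keeps the sketch dependency-free.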
Stars: 10
Forks: 1
Language: Python
License: MIT
Category:
Last pushed: Sep 27, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/matank001/copilot-agents-guard"
Open to everyone: 100 requests/day with no key; a free key raises the limit to 1,000/day.
Higher-rated alternatives
ucsandman/DashClaw
🛡️Decision infrastructure for AI agents. Intercept actions, enforce guard policies, require...
Dicklesworthstone/destructive_command_guard
The Destructive Command Guard (dcg) is for blocking dangerous git and shell commands from being...
microsoft/agent-governance-toolkit
AI Agent Governance Toolkit — Policy enforcement, zero-trust identity, execution sandboxing, and...
vstorm-co/pydantic-ai-shields
Guardrail capabilities for Pydantic AI — cost tracking, prompt injection detection, PII...
Pro-GenAI/Agent-Action-Guard
🛡️ Safe AI Agents through Action Classifier