llm-platform-security/SecGPT
An Execution Isolation Architecture for LLM-Based Agentic Systems
When you integrate external tools and applications with your large language model (LLM) assistant, SecGPT helps protect your data and system from security risks. It acts as a secure container for each LLM app, preventing one app from compromising another or inadvertently exposing sensitive information. It is aimed at developers, security engineers, and IT operations teams building and managing LLM-based agentic systems who need safe interactions between AI assistants and the tools they invoke.
107 stars. No commits in the last 6 months.
Use this if you are building LLM-powered applications and need to ensure they operate securely without risking data theft, app compromise, or unintended system changes.
Not ideal if you are a casual user of an LLM chatbot and are not involved in the development or system integration of LLM agentic systems.
Stars: 107
Forks: 12
Language: Python
License: —
Category:
Last pushed: Jan 31, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/llm-platform-security/SecGPT"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
ucsandman/DashClaw: 🛡️ Decision infrastructure for AI agents. Intercept actions, enforce guard policies, require...
Dicklesworthstone/destructive_command_guard: The Destructive Command Guard (dcg) is for blocking dangerous git and shell commands from being...
microsoft/agent-governance-toolkit: AI Agent Governance Toolkit — Policy enforcement, zero-trust identity, execution sandboxing, and...
vstorm-co/pydantic-ai-shields: Guardrail capabilities for Pydantic AI — cost tracking, prompt injection detection, PII...
Pro-GenAI/Agent-Action-Guard: 🛡️ Safe AI Agents through Action Classifier