superagent-ai/superagent
Superagent protects your AI applications against prompt injections, data leaks, and harmful outputs. Embed safety directly into your app and prove compliance to your customers.
You feed in user messages or documents, and it identifies and blocks harmful prompts, redacts personal information, and scans codebases for AI-targeted threats. It's aimed at anyone responsible for the security and compliance of AI systems, such as AI product managers or security engineers.
6,461 stars. Actively maintained with 2 commits in the last 30 days.
Use this if you are building or managing AI applications and need to proactively defend against prompt injections, data leaks, or other AI-specific security vulnerabilities.
Not ideal if you are looking for a general cybersecurity solution that isn't specifically focused on the unique risks of AI models and applications.
Stars: 6,461
Forks: 955
Language: TypeScript
License: MIT
Category:
Last pushed: Mar 11, 2026
Commits (30d): 2
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/superagent-ai/superagent"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
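The curl command above returns JSON. A minimal Python sketch for fetching the same endpoint programmatically; note the response schema is not documented in this listing, so the example just decodes and prints whatever comes back rather than assuming specific fields:

```python
import json
import urllib.request

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/agents"


def agent_url(owner: str, repo: str) -> str:
    """Build the quality-data URL for a GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"


def fetch_agent(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload for one agent.

    The response schema is undocumented here, so inspect the keys
    before relying on them.
    """
    with urllib.request.urlopen(agent_url(owner, repo)) as resp:
        return json.load(resp)


# Example (performs a network request):
#   data = fetch_agent("superagent-ai", "superagent")
#   print(json.dumps(data, indent=2))
```

With a key, the docs above suggest the limit rises to 1,000 requests/day; how the key is passed (header or query parameter) is not stated here, so check the API's own documentation.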
Related agents
hexitlabs/vigil
🛡️ Open-source safety guardrail for AI agent tool calls. <2ms, zero dependencies.
ankitlade12/AgentArmor
The full-stack safety layer for AI agents. Budget limits, prompt injection shields, PII...
mguard-ai/mguard
Memory defense for AI agents — stops MINJA, AgentPoison, and MemoryGraft attacks. Zero dependencies.
Jitera-Labs/openguard
Safety proxy for your AI Agents
WardLink/TrustLayer--Security-Control-Plane-For-LLM-AI
TrustLayer is an API-first security control plane for LLM apps and AI agents. It protects...