proventra/proventra-core
Secure your AI Agents against prompt injection attacks
This tool helps developers protect their AI agents or large language model (LLM) applications from malicious instructions. It takes user input intended for an LLM, identifies harmful 'prompt injection' attempts, and can either flag them or rewrite the input to remove the unsafe parts. Any developer building an LLM-powered application where users can provide free-form text input would benefit from this.
No commits in the last 6 months. Available on PyPI.
Use this if you are developing an AI agent or LLM application and need to filter or clean user input to prevent it from being tricked or exploited.
Not ideal if you are an end-user of an AI application and not involved in its development, or if you need to protect against general cybersecurity threats outside of prompt injection.
Stars
19
Forks
—
Language
Python
License
MIT
Category
Last pushed
Apr 24, 2025
Commits (30d)
0
Dependencies
7
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/proventra/proventra-core"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
Nebulock-Inc/agentic-threat-hunting-framework
ATHF is a framework for agentic threat hunting - building systems that can remember, learn, and...
AgentSeal/agentseal
Security toolkit for AI agents. Scan your machine for dangerous skills and MCP configs, monitor...
cosai-oasis/secure-ai-tooling
The CoSAI Risk Map is a framework for identifying, analyzing, and mitigating security risks in...
HeadyZhang/agent-audit
Static security scanner for LLM agents — prompt injection, MCP config auditing, taint analysis....
LucidAkshay/kavach
Tactical AI Workspace Monitor & EDR