CaviraOSS/SecuPrompt
Protect your AI from Prompt Injection
SecuPrompt helps AI application developers protect their large language models (LLMs) from malicious input. It scans user prompts and any associated data (such as RAG context) for threats like jailbreaks, role overrides, and data poisoning, then either blocks the harmful input or sanitizes it while preserving the user's intent, so that only safe content reaches your model. It is aimed at anyone building or maintaining AI-powered applications, especially those that handle sensitive information or are public-facing.
Use this if you are building an AI application and need a robust, auditable way to prevent users from manipulating your LLM with malicious prompts, role overrides, or poisoned data.
Not ideal if you are looking for a general-purpose content filter that applies to all types of text, as its focus is specifically on threats to LLM integrity.
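The scan-then-block flow described above can be sketched as a simple pattern-based pre-filter. This is a minimal illustration only: the function name `scanPrompt`, the `Verdict` type, and the threat patterns are hypothetical and do not reflect SecuPrompt's actual API or detection logic.

```typescript
// Hypothetical prompt-injection pre-filter (NOT SecuPrompt's real API).
// Checks an incoming prompt against a few well-known injection patterns
// and reports which threat categories matched.
type Verdict = { safe: boolean; threats: string[] };

const INJECTION_PATTERNS: Array<[string, RegExp]> = [
  ["role-override", /\b(you are now|act as|pretend to be)\b/i],
  ["instruction-override", /\bignore (all )?(previous|prior) instructions\b/i],
  ["exfiltration", /\b(reveal|print|show) (your )?(system prompt|instructions)\b/i],
];

function scanPrompt(input: string): Verdict {
  const threats = INJECTION_PATTERNS
    .filter(([, pattern]) => pattern.test(input))
    .map(([name]) => name);
  return { safe: threats.length === 0, threats };
}
```

A caller would gate the LLM request on `scanPrompt(userInput).safe`, rejecting or sanitizing flagged input. Real tools in this space layer such heuristics with semantic classifiers, since regex lists alone are easy to evade.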
Stars
11
Forks
4
Language
TypeScript
License
—
Category
Last pushed
Nov 22, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/CaviraOSS/SecuPrompt"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
LLAMATOR-Core/llamator
Red Teaming python-framework for testing chatbots and GenAI systems.
sleeepeer/PoisonedRAG
[USENIX Security 2025] PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented...
kelkalot/simpleaudit
Lets you red-team your AI systems through adversarial probing. It is simple, effective, and...
JuliusHenke/autopentest
CLI enabling more autonomous black-box penetration tests using Large Language Models (LLMs)
SecurityClaw/SecurityClaw
A modular, skill-based autonomous Security Operations Center (SOC) agent that monitors...