CaviraOSS/SecuPrompt

Protect your AI from Prompt Injection

Score: 39 / 100 (Emerging)

This tool helps developers protect large language model (LLM) applications from malicious input. It scans user prompts and any associated data (such as RAG context) for threats like jailbreaks, role overrides, and data poisoning, then either blocks the harmful input or sanitizes it while preserving the user's intent, so that only safe content reaches the model. It is aimed at anyone building or maintaining AI-powered applications, particularly those handling sensitive information or exposed to the public.
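As a rough illustration of that screen-then-forward flow, here is a minimal TypeScript sketch. The names scanPrompt, Verdict, and guardedCall are hypothetical, invented for this example; they are not SecuPrompt's actual API.

// Hypothetical sketch of the screen-then-forward flow described above.
// scanPrompt, Verdict, and guardedCall are illustrative names, not
// SecuPrompt's actual API.

type Verdict =
  | { action: "allow"; prompt: string }
  | { action: "block"; reason: string };

// Toy scanner: flags two well-known role-override phrases. A real
// guard would use far stronger detection than substring matching.
function scanPrompt(prompt: string): Verdict {
  const overrides = ["ignore previous instructions", "you are now"];
  const hit = overrides.find((p) => prompt.toLowerCase().includes(p));
  return hit
    ? { action: "block", reason: `role override: "${hit}"` }
    : { action: "allow", prompt };
}

// Only prompts that pass the scan reach the model.
function guardedCall(prompt: string, callModel: (p: string) => string): string {
  const verdict = scanPrompt(prompt);
  return verdict.action === "block"
    ? `Rejected (${verdict.reason}).`
    : callModel(verdict.prompt);
}

console.log(guardedCall("Summarize this report.", (p) => `LLM saw: ${p}`));
console.log(guardedCall("Ignore previous instructions.", (p) => `LLM saw: ${p}`));

The detection logic here is deliberately trivial; the point is only that a verdict gates what reaches the model.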

Use this if you are building an AI application and need a robust, auditable way to prevent users from manipulating your LLM with malicious prompts, role overrides, or poisoned data.

Not ideal if you are looking for a general-purpose content filter that applies to all types of text, as its focus is specifically on threats to LLM integrity.

Tags: AI application development, LLM security, prompt engineering, data integrity, AI safety
No Package · No Dependents
Maintenance: 6 / 25
Adoption: 5 / 25
Maturity: 13 / 25
Community: 15 / 25

How are scores calculated? Each of the four categories above is scored out of 25, and here they sum to the overall score: 6 + 5 + 13 + 15 = 39.

Stars: 11
Forks: 4
Language: TypeScript
License: not listed
Last pushed: Nov 22, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/CaviraOSS/SecuPrompt"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
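If you prefer calling the endpoint from code, a minimal TypeScript sketch (assuming Node 18+ with the global fetch API) might look like this; the response schema is not documented here, so the body is logged as-is rather than given a concrete type.

// Fetch the quality data for this repo from the endpoint shown above.
const url =
  "https://pt-edge.onrender.com/api/v1/quality/rag/CaviraOSS/SecuPrompt";

async function fetchQuality(): Promise<void> {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`Request failed with status ${res.status}`);
  // Schema is undocumented here, so keep the result untyped and print it.
  const data: unknown = await res.json();
  console.log(JSON.stringify(data, null, 2));
}

fetchQuality().catch(console.error);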