prompt-security/RAG_Poisoning_POC
Stealthy Prompt Injection and Poisoning in RAG Systems via Vector Database Embeddings
This project helps security professionals and developers understand how malicious instructions can be hidden within the data fed into AI systems that use Retrieval Augmented Generation (RAG). It shows how to inject harmful commands into document embeddings stored in a vector database, which can then manipulate an AI model's behavior. Anyone responsible for the security and integrity of AI applications using RAG systems would find this tool useful.
Use this if you are building or securing AI applications with RAG and need to identify and demonstrate a critical, stealthy prompt injection and data poisoning vulnerability through vector database embeddings.
Not ideal if you are looking for a general-purpose AI development framework or a tool to build secure RAG systems directly, as this is a proof-of-concept for demonstrating an attack.
Stars
12
Forks
2
Language
Python
License
—
Category
Last pushed
Nov 24, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/prompt-security/RAG_Poisoning_POC"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
LLAMATOR-Core/llamator
Red Teaming python-framework for testing chatbots and GenAI systems.
sleeepeer/PoisonedRAG
[USENIX Security 2025] PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented...
kelkalot/simpleaudit
Allows to red-team your AI systems through adversarial probing. It is simple, effective, and...
JuliusHenke/autopentest
CLI enabling more autonomous black-box penetration tests using Large Language Models (LLMs)
SecurityClaw/SecurityClaw
A modular, skill-based autonomous Security Operations Center (SOC) agent that monitors...