sleeepeer/PoisonedRAG
[USENIX Security 2025] PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models
This project helps evaluate the security of Retrieval-Augmented Generation (RAG) systems by simulating "poisoning" attacks on their knowledge bases. It takes an existing RAG setup and introduces corrupted data into its source documents, then measures how effectively the RAG system resists these attempts to generate incorrect or malicious information. It's intended for AI security researchers, system architects, and developers who are building or deploying RAG-based LLM applications and need to assess their robustness against data-level attacks.
242 stars.
Use this if you are a developer or researcher focused on AI security and need to rigorously test how vulnerable your RAG system is to attacks that corrupt its underlying data.
Not ideal if you are looking for a general-purpose RAG development framework or a tool to improve the accuracy of your RAG system through better data preparation.
Stars
242
Forks
38
Language
Python
License
MIT
Category
Last pushed
Jan 27, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/sleeepeer/PoisonedRAG"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related tools
LLAMATOR-Core/llamator
Red Teaming python-framework for testing chatbots and GenAI systems.
kelkalot/simpleaudit
Allows to red-team your AI systems through adversarial probing. It is simple, effective, and...
JuliusHenke/autopentest
CLI enabling more autonomous black-box penetration tests using Large Language Models (LLMs)
SecurityClaw/SecurityClaw
A modular, skill-based autonomous Security Operations Center (SOC) agent that monitors...
AI-secure/AgentPoison
[NeurIPS 2024] Official implementation for "AgentPoison: Red-teaming LLM Agents via Memory or...