r00tb3/RAG-Poisoning-Lab
RAG Poisoning Lab — Educational AI Security Exercise
This lab helps AI security professionals, researchers, and students understand how Retrieval-Augmented Generation (RAG) systems can be manipulated. You'll learn to inject malicious documents into a RAG's knowledge base, observe how it impacts responses, and then practice detecting and removing the poisoned content. The goal is to provide hands-on experience with real-world AI security risks.
Use this if you want to gain practical, hands-on experience with RAG data poisoning attacks, detection methods, and mitigation strategies in a controlled environment.
Not ideal if you are looking for a tool to secure a production RAG system or to perform unauthorized attacks, as this lab is strictly for educational and research purposes.
Stars
8
Forks
—
Language
Python
License
—
Category
Last pushed
Dec 07, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/r00tb3/RAG-Poisoning-Lab"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
LLAMATOR-Core/llamator
Red Teaming python-framework for testing chatbots and GenAI systems.
sleeepeer/PoisonedRAG
[USENIX Security 2025] PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented...
kelkalot/simpleaudit
Allows to red-team your AI systems through adversarial probing. It is simple, effective, and...
JuliusHenke/autopentest
CLI enabling more autonomous black-box penetration tests using Large Language Models (LLMs)
SecurityClaw/SecurityClaw
A modular, skill-based autonomous Security Operations Center (SOC) agent that monitors...