r00tb3/RAG-Poisoning-Lab

RAG Poisoning Lab — Educational AI Security Exercise

15
/ 100
Experimental

This lab helps AI security professionals, researchers, and students understand how Retrieval-Augmented Generation (RAG) systems can be manipulated. You'll learn to inject malicious documents into a RAG's knowledge base, observe how it impacts responses, and then practice detecting and removing the poisoned content. The goal is to provide hands-on experience with real-world AI security risks.

Use this if you want to gain practical, hands-on experience with RAG data poisoning attacks, detection methods, and mitigation strategies in a controlled environment.

Not ideal if you are looking for a tool to secure a production RAG system or to perform unauthorized attacks, as this lab is strictly for educational and research purposes.

AI Security RAG Systems Adversarial AI Data Poisoning Enterprise AI Risk
No License No Package No Dependents
Maintenance 6 / 25
Adoption 4 / 25
Maturity 5 / 25
Community 0 / 25

How are scores calculated?

Stars

8

Forks

Language

Python

License

Last pushed

Dec 07, 2025

Commits (30d)

0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/r00tb3/RAG-Poisoning-Lab"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.