sleeepeer/PoisonedRAG

[USENIX Security 2025] PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models

55
/ 100
Established

This project helps evaluate the security of Retrieval-Augmented Generation (RAG) systems by simulating "poisoning" attacks on their knowledge bases. It takes an existing RAG setup and introduces corrupted data into its source documents, then measures how effectively the RAG system resists these attempts to generate incorrect or malicious information. It's intended for AI security researchers, system architects, and developers who are building or deploying RAG-based LLM applications and need to assess their robustness against data-level attacks.

242 stars.

Use this if you are a developer or researcher focused on AI security and need to rigorously test how vulnerable your RAG system is to attacks that corrupt its underlying data.

Not ideal if you are looking for a general-purpose RAG development framework or a tool to improve the accuracy of your RAG system through better data preparation.

AI Security RAG System Evaluation LLM Vulnerability Testing Data Poisoning Attacks Artificial Intelligence Auditing
No Package No Dependents
Maintenance 10 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 19 / 25

How are scores calculated?

Stars

242

Forks

38

Language

Python

License

MIT

Last pushed

Jan 27, 2026

Commits (30d)

0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/sleeepeer/PoisonedRAG"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.