HyeonjeongHa/MM-PoisonRAG

Official PyTorch implementation of "MM-PoisonRAG: Disrupting Multimodal RAG with Local and Global Poisoning Attacks"

25
/ 100
Experimental

This project helps evaluate the security vulnerabilities of multimodal AI systems that answer questions using external knowledge. It takes an existing multimodal knowledge base (text and images) and a question-answering AI model. It then generates manipulated knowledge (text and images) designed to make the AI produce incorrect or nonsensical answers, allowing you to assess how easily an AI system can be misled. This is useful for AI security researchers, red teamers, or developers building multimodal RAG systems.

Use this if you need to understand and test how vulnerable your multimodal AI models and their knowledge bases are to targeted misinformation or broad disruption.

Not ideal if you are looking for a tool to defend against these attacks, as this project focuses solely on generating and evaluating the attacks themselves.

AI-security red-teaming multimodal-AI knowledge-base-security large-language-models
No License No Package No Dependents
Maintenance 6 / 25
Adoption 5 / 25
Maturity 8 / 25
Community 6 / 25

How are scores calculated?

Stars

12

Forks

1

Language

Python

License

Last pushed

Dec 04, 2025

Commits (30d)

0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/HyeonjeongHa/MM-PoisonRAG"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.