StavC/Here-Comes-the-AI-Worm
Here Comes the AI Worm: Preventing the Propagation of Adversarial Self-Replicating Prompts Within GenAI Ecosystems
This project protects GenAI-powered applications from 'AI worms': self-replicating adversarial prompts that spread malicious instructions through a GenAI ecosystem. It analyzes incoming prompts and RAG-retrieved content, blocks self-replicating adversarial prompts, and thereby prevents downstream actions such as spamming, phishing, and data leakage. Security engineers, AI risk managers, and operations teams running GenAI systems would use this to secure their platforms.
222 stars. No commits in the last 6 months.
Use this if you manage GenAI applications, especially those using Retrieval-Augmented Generation (RAG), and need to prevent automated, self-spreading attacks that compromise system integrity and user data.
Not ideal if your GenAI applications are not internet-facing, do not use RAG, or are used in closed, highly controlled environments where prompt injection risks are minimal.
Stars: 222
Forks: 27
Language: Jupyter Notebook
License: —
Category: —
Last pushed: Sep 07, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/StavC/Here-Comes-the-AI-Worm"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.
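The curl command above can be generalized to any repository slug. A minimal Python sketch, assuming only the endpoint pattern shown in the example (the helper name and function signature are illustrative, not part of the API):

```python
# Build the quality-data API URL for a given GitHub owner/repo pair.
# The base path is taken verbatim from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering"

def quality_url(owner: str, repo: str) -> str:
    """Return the catalog's quality endpoint URL for an owner/repo slug."""
    return f"{API_BASE}/{owner}/{repo}"

# Reproduces the URL used in the curl example:
print(quality_url("StavC", "Here-Comes-the-AI-Worm"))
```

Fetching that URL (e.g. with `curl` or `urllib.request`) returns the repository's quality data; the response format is not documented here, so inspect it before parsing.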
Higher-rated alternatives
liu00222/Open-Prompt-Injection
This repository provides a benchmark for prompt injection attacks and defenses in LLMs.
lakeraai/pint-benchmark
A benchmark for prompt injection detection systems.
R3dShad0w7/PromptMe
PromptMe is an educational project that showcases security vulnerabilities in large language...
cybozu/prompt-hardener
Prompt Hardener analyzes prompt-injection-originated risk in LLM-based agents and applications.
mthamil107/prompt-shield
Self-learning prompt injection detection engine that gets smarter with every attack — 21...