olegnazarov/rag-security-scanner
RAG/LLM Security Scanner identifies critical vulnerabilities in AI-powered applications, including chatbots, virtual assistants, and knowledge retrieval systems.
This tool helps security professionals and developers secure AI-powered applications such as chatbots and virtual assistants. Point it at your RAG (Retrieval-Augmented Generation) system or LLM application and it tests for common vulnerabilities, producing detailed reports on potential security flaws. Anyone responsible for the security, development, or deployment of AI applications can use it to proactively identify and fix risks.
No commits in the last 6 months.
Use this if you need to perform professional-grade security testing on your AI-powered chatbots, virtual assistants, or knowledge retrieval systems to find vulnerabilities like prompt injection or data leakage.
Not ideal if you are looking for a general cybersecurity scanner for traditional web applications or networks, as this tool is specifically designed for RAG and LLM systems.
Stars
62
Forks
10
Language
Python
License
MIT
Category
Last pushed
Sep 14, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/olegnazarov/rag-security-scanner"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
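The curl call above can also be made from a script. A minimal Python sketch, assuming the endpoint returns JSON (the response fields are not documented here, so only the raw decoded object is returned; the helper names are hypothetical):

```python
import json
import urllib.request

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"

def quality_url(owner: str, repo: str) -> str:
    """Build the API URL for a given GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report.

    No API key is needed for up to 100 requests/day; a free key
    raises the limit to 1,000/day (per the note above).
    """
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

# Usage:
# data = fetch_quality("olegnazarov", "rag-security-scanner")
```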
Higher-rated alternatives
LLAMATOR-Core/llamator
Red-teaming Python framework for testing chatbots and GenAI systems.
sleeepeer/PoisonedRAG
[USENIX Security 2025] PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented...
kelkalot/simpleaudit
Lets you red-team your AI systems through adversarial probing. It is simple, effective, and...
JuliusHenke/autopentest
CLI enabling more autonomous black-box penetration tests using Large Language Models (LLMs)
SecurityClaw/SecurityClaw
A modular, skill-based autonomous Security Operations Center (SOC) agent that monitors...