AIGoat and ai-goat
These are **competitors** — both provide hands-on AI security training through vulnerable application environments, but one focuses on attacking/defending a realistic e-commerce system while the other uses isolated CTF challenges, requiring learners to choose between a holistic application context versus modular exploits.
About AIGoat
AISecurityConsortium/AIGoat
AI Goat - Learn AI security by attacking and defending a real AI-powered e-commerce application. Built for Red Teamers, security researchers, AI enthusiasts, and students to learn about adversarial attacks on AI/LLM systems. It is strictly for educational use, and the authors disclaim responsibility for any misuse.
This project provides a deliberately vulnerable AI-powered e-commerce application to help you learn and practice attacking and defending large language models (LLMs). You feed the system prompts and observe how the LLM responds, allowing you to identify and exploit security weaknesses like prompt injection or data leakage. It's designed for security engineers, red teamers, researchers, and students to gain hands-on experience with LLM security risks.
About ai-goat
dhammon/ai-goat
Learn AI security through a series of vulnerable LLM CTF challenges. No sign ups, no cloud fees, run everything locally on your system.
This project offers a hands-on way for security professionals and developers to learn about vulnerabilities in AI large language models (LLMs). It provides a series of local, self-contained "capture the flag" challenges where you interact with a simulated vulnerable LLM, identify security flaws, and find hidden "flags." This is ideal for security teams looking to enhance their practical skills in LLM security.
Scores updated daily from GitHub, PyPI, and npm data. How scores work