AISecurityConsortium/AIGoat

AI Goat - Learn AI security by attacking and defending a real AI-powered e-commerce application. Built for red teamers, security researchers, AI enthusiasts, and students learning about adversarial attacks on AI/LLM systems. It is strictly for educational use; the authors disclaim responsibility for any misuse.

Score: 47 / 100 (Emerging)

This project provides a deliberately vulnerable AI-powered e-commerce application for learning and practicing attacks on, and defenses of, large language models (LLMs). You feed prompts to the system and observe how the LLM responds, allowing you to identify and exploit security weaknesses such as prompt injection or data leakage. It is designed for security engineers, red teamers, researchers, and students who want hands-on experience with LLM security risks.

Use this if you want hands-on experience practicing adversarial attacks against a live AI chatbot to understand LLM vulnerabilities and test defense strategies in a safe, controlled environment.

Not ideal if you are looking for a cloud-based security scanning tool for production AI infrastructure or generic AI security frameworks.

Tags: AI security training, LLM red teaming, penetration testing, cybersecurity education, vulnerability research
No package · No dependents
Maintenance: 10 / 25
Adoption: 7 / 25
Maturity: 15 / 25
Community: 15 / 25
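The four subscores appear to sum to the overall score (10 + 7 + 15 + 15 = 47, matching the 47 / 100 shown above); whether the site applies any further weighting is not stated here. A quick check:

```python
# Subscores from the breakdown above (each out of 25).
subscores = {
    "Maintenance": 10,
    "Adoption": 7,
    "Maturity": 15,
    "Community": 15,
}

overall = sum(subscores.values())
print(overall)  # 47, matching the overall 47 / 100 score
```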


Stars: 25
Forks: 5
Language: JavaScript
License: MIT
Last pushed: Mar 09, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/AISecurityConsortium/AIGoat"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
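Beyond the curl one-liner, the same endpoint can be queried from a script. A minimal Python sketch using only the standard library; the structure of the JSON payload is not documented here, so the script simply pretty-prints whatever the endpoint returns:

```python
import json
import urllib.request

# Base endpoint taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def quality_url(repo: str) -> str:
    """Build the quality-endpoint URL for an owner/name repo slug."""
    return f"{BASE}/{repo}"


def fetch_quality(repo: str) -> dict:
    """Fetch and decode the JSON payload (field names depend on the API)."""
    with urllib.request.urlopen(quality_url(repo), timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    data = fetch_quality("AISecurityConsortium/AIGoat")
    print(json.dumps(data, indent=2))
```

Note the anonymous tier is rate-limited to 100 requests/day, so cache responses rather than polling in a loop.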