SnailSploit/AATMF-Adversarial-AI-Threat-Modeling-Framework
AATMF | An Open-Source Adversarial AI Threat Modeling Framework
This framework helps cybersecurity professionals identify, understand, and defend against threats unique to AI systems. It provides a catalog of AI-specific attack vectors, from prompt injection to training-data poisoning, so you can assess vulnerabilities and build defenses for your AI applications. It is designed for security analysts, red team operators, and incident responders working with AI.
Use this if you need a structured way to understand and mitigate adversarial attacks against your AI systems, much like MITRE ATT&CK for enterprise networks.
Not ideal if you are looking for a general cybersecurity framework that doesn't focus specifically on AI vulnerabilities.
Stars: 7
Forks: 1
Language: YARA
License: —
Category:
Last pushed: Feb 22, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/generative-ai/SnailSploit/AATMF-Adversarial-AI-Threat-Modeling-Framework"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
microsoft/PyRIT
The Python Risk Identification Tool for generative AI (PyRIT) is an open source framework built...
Azure/PyRIT
The Python Risk Identification Tool for generative AI (PyRIT) is an open source framework built...
arsbr/Veritensor
The Anti-Virus for AI Artifacts & RAG Firewall. A static analysis tool scanning Models and...
canada-ca/navigator
Real-time, collaborative, threat modeling tool. / Un outil collaboratif de modélisation des...
ErdemOzgen/RedAiRange
AI Red Teaming Range