SnailSploit/AATMF-Adversarial-AI-Threat-Modeling-Framework

AATMF | An Open Source - Adversarial AI Threat Modeling Framework

Score: 39 / 100 (Emerging)

This framework helps cybersecurity professionals identify, understand, and defend against threats unique to AI systems. It provides a comprehensive catalog of AI-specific attack vectors, from prompt injection to training data poisoning, enabling you to assess vulnerabilities and develop robust defenses for your AI applications. It is designed for security analysts, red team operators, and incident responders working with AI.

Use this if you need a structured way to understand and mitigate adversarial attacks against your AI systems, much like MITRE ATT&CK for enterprise networks.

Not ideal if you are looking for a general cybersecurity framework that doesn't focus specifically on AI vulnerabilities.

AI-security threat-modeling red-teaming incident-response AI-governance
No package · No dependents
Maintenance: 10 / 25
Adoption: 4 / 25
Maturity: 16 / 25
Community: 9 / 25

Stars: 7
Forks: 1
Language: YARA
License: not specified
Last pushed: Feb 22, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/generative-ai/SnailSploit/AATMF-Adversarial-AI-Threat-Modeling-Framework"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
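The same endpoint can also be queried from a script. A minimal Python sketch using only the standard library, assuming the endpoint returns a JSON document (the response field names are not documented here, so the result is printed as-is rather than parsed into specific fields):

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a repository in a given category."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch the quality record as JSON (no API key: 100 requests/day)."""
    with urllib.request.urlopen(quality_url(category, owner, repo), timeout=10) as resp:
        return json.load(resp)


# Example (performs a network request):
#   data = fetch_quality("generative-ai", "SnailSploit",
#                        "AATMF-Adversarial-AI-Threat-Modeling-Framework")
#   print(json.dumps(data, indent=2))
```

With a free API key, the daily limit rises to 1,000 requests; how the key is passed (header vs. query parameter) is not specified on this page, so check the API documentation before adding it to the request.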