dreadnode/AIRTBench-Code
Code Repository for: AIRTBench: Measuring Autonomous AI Red Teaming Capabilities in Language Models
This project helps AI security researchers and red teamers automatically test large language models (LLMs) for vulnerabilities. It gives an LLM-based agent a set of AI/ML Capture The Flag (CTF) challenges and systematically attempts to exploit each target. The output is a benchmark of the LLM's adversarial AI capabilities.
Use this if you need to objectively measure how effectively your LLMs can conduct autonomous adversarial attacks in a controlled environment.
Not ideal if you're looking for a manual red-teaming tool or if you don't have access to the Dreadnode Strikes platform.
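To make that workflow concrete, here is a minimal sketch of such an evaluation loop in Python. Everything in it is hypothetical (the Challenge record, run_benchmark, the substring flag check); it illustrates the shape of an LLM-vs-CTF harness under stated assumptions, not the actual AIRTBench implementation:

```python
# Hypothetical sketch of an LLM-vs-CTF evaluation loop.
# None of these names come from the AIRTBench codebase.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Challenge:
    name: str
    prompt: str   # task description handed to the attacking LLM
    flag: str     # secret value a successful exploit recovers

def run_benchmark(
    ask_llm: Callable[[str], str],   # the LLM under evaluation
    challenges: list[Challenge],
    max_turns: int = 10,             # attempts allowed per challenge
) -> float:
    """Return the fraction of challenges the LLM solved."""
    solved = 0
    for ch in challenges:
        for _ in range(max_turns):
            reply = ask_llm(ch.prompt)
            if ch.flag in reply:     # naive flag check, for illustration only
                solved += 1
                break
    return solved / len(challenges)
```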
Stars: 93
Forks: 14
Language: Jupyter Notebook
License: Apache-2.0
Category:
Last pushed: Mar 11, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/dreadnode/AIRTBench-Code"
Open to everyone: 100 requests/day with no key needed. Get a free key to raise the limit to 1,000 requests/day.
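The same endpoint can be queried from Python. Only the URL comes from the curl example above; the response schema and the "X-Api-Key" header name are assumptions for illustration:

```python
# Fetch the repo-quality data shown above. The "X-Api-Key" header name
# is an assumption; only the URL is taken from this page.
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/agents/dreadnode/AIRTBench-Code"
headers = {}  # anonymous access: 100 requests/day
# headers["X-Api-Key"] = "YOUR_KEY"  # hypothetical header for the keyed tier

resp = requests.get(url, headers=headers, timeout=10)
resp.raise_for_status()
print(resp.json())
```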
Related agents
StonyBrookNLP/appworld
🌍 AppWorld: A Controllable World of Apps and People for Benchmarking Function Calling and...
qualifire-dev/rogue
AI Agent Evaluator & Red Team Platform
microsoft/WindowsAgentArena
Windows Agent Arena (WAA) 🪟 is a scalable OS platform for testing and benchmarking of...
future-agi/ai-evaluation
Evaluation Framework for all your AI related Workflows
agentscope-ai/OpenJudge
OpenJudge: A Unified Framework for Holistic Evaluation and Quality Rewards