PurCL/ASTRA
🥇 Amazon Nova AI Challenge Winner - ASTRA took first place as the top attacking team in Amazon's global AI safety competition, defeating elite defending teams from universities worldwide in live adversarial evaluation.
This system helps AI safety engineers and security professionals thoroughly test AI coding assistants for vulnerabilities. It takes a target AI model and a domain of interest (like secure code generation) as input, then autonomously generates test cases, conducts multi-turn conversations, and identifies weaknesses. The output is a log of detected vulnerabilities and insights into the model's reasoning failures, helping to improve its safety and robustness.
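To make the workflow concrete, here is a minimal sketch of the generate-probe / multi-turn / log-finding loop that the description outlines. None of these names come from ASTRA's actual code or API; every function and class below is a hypothetical stand-in that only illustrates the shape of such a pipeline.

# Hypothetical sketch of the red-teaming loop described above.
# All names (Finding, target_model, generate_probes, looks_unsafe, red_team)
# are illustrative stand-ins, not ASTRA's real interface.
from dataclasses import dataclass, field

@dataclass
class Finding:
    probe: str
    transcript: list[str] = field(default_factory=list)

def target_model(prompt: str) -> str:
    # Stand-in for the AI coding assistant under test; returns an unsafe
    # snippet for one probe so the demo yields a finding.
    if "#1" in prompt:
        return "try os.system(user_input)  # insecure suggestion"
    return f"benign response to: {prompt}"

def generate_probes(domain: str, n: int = 3) -> list[str]:
    # Stand-in for autonomous test-case generation in the chosen domain.
    return [f"{domain} probe #{i}" for i in range(n)]

def looks_unsafe(response: str) -> bool:
    # Stand-in for the vulnerability detector; real systems use far richer checks.
    return "eval(" in response or "os.system(" in response

def red_team(domain: str, turns: int = 3) -> list[Finding]:
    findings = []
    for probe in generate_probes(domain):
        transcript, prompt = [], probe
        for _ in range(turns):  # multi-turn follow-ups against the target
            reply = target_model(prompt)
            transcript.extend([prompt, reply])
            if looks_unsafe(reply):
                findings.append(Finding(probe, transcript))
                break
            prompt = f"Follow up on: {reply}"
    return findings

if __name__ == "__main__":
    for f in red_team("secure code generation"):
        print(f.probe, "->", len(f.transcript), "messages logged")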
No commits in the last 6 months.
Use this if you need comprehensive, automated red-teaming of your AI coding assistants to find and fix security vulnerabilities without relying on static benchmarks.
Not ideal if you're looking for a simple, one-off jailbreak tool or if your AI assistant is not focused on software development or security guidance.
Stars: 70
Forks: 4
Language: Python
License: MIT
Category:
Last pushed: Aug 14, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/PurCL/ASTRA"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
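If you prefer calling the endpoint from Python rather than curl, here is a minimal sketch. The URL is taken from the page above; the response being JSON and the way a paid/free key would be passed are assumptions, not documented behavior, so only the keyless call is shown.

# Minimal sketch of calling the quality API from Python.
# The URL comes from this page; treating the response as JSON is an assumption.
import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/PurCL/ASTRA"

resp = requests.get(URL, timeout=10)  # no key needed within 100 requests/day
resp.raise_for_status()
data = resp.json()  # assumed JSON payload with the repo's quality metrics
print(data)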
Higher-rated alternatives
format81/TI-Mindmap-GPT
AI-powered tool designed to help produce Threat Intelligence mindmaps.
bobby-tablez/TTP-Threat-Feeds
Threat feeds designed to extract adversarial TTPs and IOCs, using: ✨AI✨
KryptSec/oasis
Open-source AI security benchmarking CLI. Measure how AI models perform offensive security tasks...
ethiack/ai4eh
AI for Ethical Hacking - Workshop
amazon-science/Cyber-Zero
Cyber-Zero: Training Cybersecurity Agents Without Runtime