PurCL/ASTRA

🥇 Amazon Nova AI Challenge Winner - ASTRA emerged victorious as the top attacking team in Amazon's global AI safety competition, defeating elite defending teams from universities worldwide in live adversarial evaluation.

Quality score: 34 / 100 (Emerging)

This system helps AI safety engineers and security professionals thoroughly test AI coding assistants for vulnerabilities. It takes a target AI model and a domain of interest (like secure code generation) as input, then autonomously generates test cases, conducts multi-turn conversations, and identifies weaknesses. The output is a log of detected vulnerabilities and insights into the model's reasoning failures, helping to improve its safety and robustness.

No commits in the last 6 months.

Use this if you need to perform comprehensive, automated red-teaming on your AI software assistants to find and fix security vulnerabilities without relying on static benchmarks.

Not ideal if you're looking for a simple, one-off jailbreak tool or if your AI assistant is not focused on software development or security guidance.

Tags: AI safety, AI security testing, vulnerability assessment, AI assistant red-teaming, secure coding
Signals: Stale (6m) · No package · No dependents
Maintenance: 2 / 25
Adoption: 9 / 25
Maturity: 15 / 25
Community: 8 / 25


Stars: 70
Forks: 4
Language: Python
License: MIT
Last pushed: Aug 14, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/PurCL/ASTRA"

Open to everyone: 100 requests/day with no API key; a free key raises the limit to 1,000/day.
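The curl call above can also be made from Python using only the standard library. This is a minimal sketch: the endpoint URL comes from the example above, but the shape of the returned JSON is not documented here, so the script simply pretty-prints whatever the API returns rather than assuming field names.

```python
import json
import urllib.request

# Base endpoint from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a given GitHub owner/repo."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record as parsed JSON (no key needed, 100 req/day)."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Pretty-print the raw record; inspect it to learn the actual schema.
    print(json.dumps(fetch_quality("PurCL", "ASTRA"), indent=2))
```

With a free API key, you would typically pass it as a header or query parameter; the exact mechanism isn't specified on this page, so check the API docs before relying on authenticated requests.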