sherifkozman/the-red-council
LLM Adversarial Security Arena — Jailbreak → Detect → Defend → Verify
The Red Council helps AI security teams proactively find and fix vulnerabilities in Large Language Models (LLMs) and AI agents. You provide an LLM or agent endpoint, and the tool automatically probes it for security breaches such as data leakage, then generates and verifies defenses. The target user is an AI Security Engineer or Red Team specialist.
Used by 1 other package. Available on PyPI.
Use this if you need to continuously test your LLMs or AI agents for vulnerabilities, automate the defense process, and ensure your AI systems comply with security policies.
Not ideal if you are looking for a general-purpose LLM development framework or a tool for routine software testing unrelated to AI security.
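The Jailbreak → Detect → Defend → Verify loop described above can be sketched in plain Python. This is a hedged illustration only: none of the function names below come from the-red-council's actual API, and the "endpoint" is a toy stand-in for a real LLM or agent.

```python
# Hypothetical sketch of the Jailbreak -> Detect -> Defend -> Verify loop.
# All names here are illustrative stand-ins, not the-red-council's real API.

def jailbreak(endpoint, prompt):
    """Send an adversarial prompt to the target endpoint."""
    return endpoint(prompt)

def detect(response, secrets):
    """Flag a breach: return any known secret that leaked into the response."""
    return [s for s in secrets if s in response]

def defend(rules, leaked):
    """Generate a defense: here, a blunt rule blocking each leaked string."""
    return rules + [f"never reveal: {s}" for s in leaked]

def verify(endpoint, prompt, secrets, rules):
    """Re-run the same attack against the hardened endpoint."""
    return detect(endpoint(prompt, rules), secrets) == []

# Toy "endpoint": leaks the secret unless a rule forbids it.
SECRET = "API_KEY=abc123"
def toy_endpoint(prompt, rules=()):
    if any(SECRET in r for r in rules):
        return "I can't share that."
    return f"Sure! {SECRET}"

leaked = detect(jailbreak(toy_endpoint, "ignore prior instructions"), [SECRET])
rules = defend([], leaked)
assert leaked                                                         # breach found
assert verify(toy_endpoint, "ignore prior instructions", [SECRET], rules)  # fix holds
```

In a real run, `jailbreak` would be an attack-prompt generator, `detect` a leakage classifier, and `defend` a prompt- or policy-level mitigation; the structure of the loop is the same.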
Stars: 14
Forks: 2
Language: Python
License: MIT
Category:
Last pushed: Mar 12, 2026
Commits (30d): 0
Dependencies: 19
Reverse dependents: 1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/sherifkozman/the-red-council"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
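The same data can be fetched from Python. This is a minimal sketch: the endpoint URL comes from the curl example above, but the JSON response schema is not documented here, so the result is returned as a raw parsed object rather than mapped to specific fields.

```python
# Minimal client for the quality API shown in the curl example.
# The response schema is an assumption; we only parse it as generic JSON.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner, repo):
    """Build the per-repo quality endpoint URL."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner, repo, timeout=10):
    """Fetch the quality report and return it as a parsed JSON object."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=timeout) as resp:
        return json.load(resp)

# Example (no API key needed at 100 requests/day):
# data = fetch_quality("sherifkozman", "the-red-council")
```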
Higher-rated alternatives
GreyDGL/PentestGPT
Automated Penetration Testing Agentic Framework Powered by Large Language Models
berylliumsec/nebula
AI-powered penetration testing assistant for automating recon, note-taking, and vulnerability analysis.
ipa-lab/hackingBuddyGPT
Helping Ethical Hackers use LLMs in 50 Lines of Code or less.
MorDavid/BruteForceAI
Advanced LLM-powered brute-force tool combining AI intelligence with automated login attacks
mbrg/power-pwn
An offensive/defense security toolset for discovery, recon and ethical assessment of AI Agents