sherifkozman/the-red-council

LLM Adversarial Security Arena — Jailbreak → Detect → Defend → Verify

Score: 48 / 100 (Emerging)

The Red Council helps AI security teams proactively find and fix vulnerabilities in Large Language Models (LLMs) and AI agents. You provide an LLM or agent endpoint, and it automatically tests for security breaches such as data leakage, then generates and verifies defenses. It is aimed at AI Security Engineers and Red Team specialists.

Used by 1 other package. Available on PyPI.

Use this if you need to continuously test your LLMs or AI agents for vulnerabilities, automate the defense process, and ensure your AI systems comply with security policies.

Not ideal if you are looking for a general-purpose LLM development framework or a tool for routine software testing unrelated to AI security.

Tags: AI-security, LLM-red-teaming, vulnerability-assessment, AI-agent-security, prompt-injection
Maintenance 10 / 25
Adoption 6 / 25
Maturity 22 / 25
Community 10 / 25


Stars: 14
Forks: 2
Language: Python
License: MIT
Last pushed: Mar 12, 2026
Commits (30d): 0
Dependencies: 19
Reverse dependents: 1

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/sherifkozman/the-red-council"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
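For programmatic use, the same endpoint shown in the curl example can be queried from Python. A minimal sketch, using only the standard library; the helper names (`quality_url`, `fetch_quality`) are illustrative, and the shape of the JSON response body is an assumption, not documented here:

```python
import json
import urllib.request

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the report URL, e.g. for llm-tools/sherifkozman/the-red-council."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """GET the quality report and decode the JSON body.

    No API key is required for up to 100 requests/day; the response
    schema (e.g. which score fields it contains) is not specified here.
    """
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(quality_url("llm-tools", "sherifkozman", "the-red-council"))
```

With a free key (1,000 requests/day), you would presumably attach it to the request; how the key is passed (header vs. query parameter) is not documented on this page, so the sketch omits it.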