henchiyb/breaker-ai
Breaker AI - Security check for your LLM prompts
Breaker-AI helps you proactively test the security of your AI applications by analyzing your system prompts. It takes your prompts as input and reports potential weaknesses such as susceptibility to prompt injection and jailbreak attempts. The tool is aimed at security teams, AI researchers, and developers who build and deploy LLM-powered applications.
No commits in the last 6 months.
Use this if you need to identify and fix security weaknesses in your AI prompts before your applications are exposed to real-world attackers.
Not ideal if you are looking for a tool to secure your entire application stack, as this focuses specifically on LLM prompt security.
Stars: 8
Forks: 2
Language: TypeScript
License: MIT
Category: n/a
Last pushed: Jul 09, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/henchiyb/breaker-ai"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
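The same endpoint can also be called programmatically. Below is a minimal TypeScript sketch, assuming a Node 18+ runtime with a global fetch and that the endpoint returns JSON; the exact response shape is not documented on this page, so the result is simply logged as-is.

// Fetch the quality data for henchiyb/breaker-ai from the public API.
const url =
  "https://pt-edge.onrender.com/api/v1/quality/llm-tools/henchiyb/breaker-ai";

async function fetchQualityData(): Promise<void> {
  const res = await fetch(url);
  if (!res.ok) {
    // Surface HTTP errors (e.g. rate limiting at 100 requests/day without a key).
    throw new Error(`Request failed: ${res.status} ${res.statusText}`);
  }
  // Assumption: the endpoint returns a JSON document; log it unmodified.
  const data: unknown = await res.json();
  console.log(data);
}

fetchQualityData().catch(console.error);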
Higher-rated alternatives
ethz-spylab/agentdojo
A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents.
guardrails-ai/guardrails
Adding guardrails to large language models.
JasonLovesDoggo/caddy-defender
Caddy module to block or manipulate requests originating from AIs or cloud services trying to...
inkdust2021/VibeGuard
Uses just 1% memory while protecting 99% of your personal privacy.
deadbits/vigil-llm
⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language...