henchiyb/breaker-ai

Breaker AI - Security check for your LLM prompts

34
/ 100
Emerging

Breaker-AI helps you proactively test the security of your AI applications by analyzing your system prompts. It takes your AI prompts as input and provides reports on potential vulnerabilities like prompt injection risks and jailbreak attempts. This tool is designed for security teams, AI researchers, and developers who are building and deploying large language models.

No commits in the last 6 months.

Use this if you need to identify and fix security weaknesses in your AI prompts before your applications are exposed to real-world attackers.

Not ideal if you are looking for a tool to secure your entire application stack, as this focuses specifically on LLM prompt security.

AI-security LLM-vulnerability-testing prompt-engineering application-security AI-safety
Stale 6m No Package No Dependents
Maintenance 2 / 25
Adoption 4 / 25
Maturity 15 / 25
Community 13 / 25

How are scores calculated?

Stars

8

Forks

2

Language

TypeScript

License

MIT

Last pushed

Jul 09, 2025

Commits (30d)

0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/henchiyb/breaker-ai"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.