whitecircle-ai/circle-guard-bench
First-of-its-kind AI benchmark for evaluating the protection capabilities of large language model (LLM) guard systems (guardrails and safeguards)
This tool helps AI safety teams and model developers evaluate how well their large language model (LLM) guard systems protect against harmful content and malicious attacks. It takes various LLM guard models and test prompts (both safe and unsafe) as input, and outputs a comprehensive score that assesses protection capabilities, resistance to jailbreaks, and real-time performance. This is for professionals building or deploying LLM applications who need to ensure their AI is safe, robust, and performs efficiently in production.
Use this if you need to rigorously compare and select LLM guard models based on their ability to block harmful content, resist jailbreaks, avoid false positives, and maintain performance under realistic conditions.
Not ideal if you are looking for a tool to develop or train new LLM guard models, or if your primary focus is on evaluating the general helpfulness or factual accuracy of an LLM.
Stars
51
Forks
3
Language
Python
License
Apache-2.0
Category
Last pushed
Mar 07, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/whitecircle-ai/circle-guard-bench"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
ethz-spylab/agentdojo
A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents.
guardrails-ai/guardrails
Adding guardrails to large language models.
JasonLovesDoggo/caddy-defender
Caddy module to block or manipulate requests originating from AIs or cloud services trying to...
inkdust2021/VibeGuard
Uses just 1% memory while protecting 99% of your personal privacy.
deadbits/vigil-llm
⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language...