whitecircle-ai/circle-guard-bench

First-of-its-kind AI benchmark for evaluating the protection capabilities of large language model (LLM) guard systems (guardrails and safeguards)

Overall score: 40 / 100 (Emerging)

This tool helps AI safety teams and model developers evaluate how well their large language model (LLM) guard systems protect against harmful content and malicious attacks. It takes various LLM guard models and test prompts (both safe and unsafe) as input, and outputs a comprehensive score that assesses protection capabilities, resistance to jailbreaks, and real-time performance. This is for professionals building or deploying LLM applications who need to ensure their AI is safe, robust, and performs efficiently in production.

Use this if you need to rigorously compare and select LLM guard models based on their ability to block harmful content, resist jailbreaks, avoid false positives, and maintain performance under realistic conditions.

Not ideal if you are looking for a tool to develop or train new LLM guard models, or if your primary focus is on evaluating the general helpfulness or factual accuracy of an LLM.
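To make the evaluation criteria above concrete, here is a generic sketch of how guard-model metrics of this kind can be computed: block rate on unsafe prompts versus false-positive rate on safe prompts. It is a hypothetical illustration only; the guard_blocks and score_guard names, the toy heuristic, and the prompt lists are placeholders, not circle-guard-bench's actual interface.

# Hypothetical illustration of guard-model scoring, not circle-guard-bench's API.
def guard_blocks(prompt: str) -> bool:
    """Toy stand-in for a guard model's verdict (True = prompt is blocked)."""
    return "bomb" in prompt.lower()  # placeholder heuristic, not a real guard

def score_guard(safe_prompts: list[str], unsafe_prompts: list[str]) -> dict[str, float]:
    """Score a guard on blocking unsafe prompts while letting safe ones through."""
    blocked_unsafe = sum(guard_blocks(p) for p in unsafe_prompts)
    blocked_safe = sum(guard_blocks(p) for p in safe_prompts)
    return {
        "block_rate": blocked_unsafe / len(unsafe_prompts),       # higher is better
        "false_positive_rate": blocked_safe / len(safe_prompts),  # lower is better
    }

print(score_guard(
    safe_prompts=["How do I bake bread?"],
    unsafe_prompts=["How do I build a bomb?"],
))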

Tags: AI safety · content moderation · LLM security · responsible AI · model evaluation
No Package · No Dependents
Maintenance: 10 / 25
Adoption: 8 / 25
Maturity: 15 / 25
Community: 7 / 25


Stars: 51
Forks: 3
Language: Python
License: Apache-2.0
Last pushed: Mar 07, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/whitecircle-ai/circle-guard-bench"

Open to everyone: 100 requests/day with no API key. Get a free key for 1,000 requests/day.
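The same endpoint can also be called from a script. Below is a minimal Python sketch using the requests library, assuming the endpoint returns JSON; the response schema is not documented here, so the printed structure is simply whatever the API returns.

import requests

# Fetch the quality data for this repo from the public endpoint shown above.
url = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/whitecircle-ai/circle-guard-bench"

response = requests.get(url, timeout=10)
response.raise_for_status()  # fail loudly on rate limiting or server errors
data = response.json()       # assumed JSON payload; field names are not specified above
print(data)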