cyberark/FuzzyAI

A powerful tool for automated LLM fuzzing. It is designed to help developers and security researchers identify and mitigate potential jailbreaks in their LLM APIs.

58
/ 100
Established

This tool helps developers and security researchers proactively find and fix potential "jailbreaks" in their Large Language Model (LLM) APIs. You provide it with your LLM API endpoint and a set of test prompts, and it automatically generates many variations to discover if the model can be tricked into producing harmful or unintended responses. The output identifies specific vulnerabilities so you can make your LLM applications safer and more reliable.

1,250 stars.

Use this if you are a developer or security researcher building or integrating LLMs and need to ensure your API endpoints are robust against malicious or unintended prompt injections.

Not ideal if you are an end-user simply interacting with an existing LLM application and do not have access to its API for testing.

LLM security API testing vulnerability research prompt engineering AI safety
No Package No Dependents
Maintenance 10 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 22 / 25

How are scores calculated?

Stars

1,250

Forks

174

Language

Jupyter Notebook

License

Apache-2.0

Last pushed

Feb 06, 2026

Commits (30d)

0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/cyberark/FuzzyAI"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.