FuzzyAI and ps-fuzz
These tools are **complements** that address different aspects of LLM security testing: FuzzyAI focuses on identifying jailbreaks through adversarial fuzzing of LLM APIs, while ps-fuzz specializes in hardening system prompts through targeted testing, allowing security teams to use both in sequence to comprehensively validate their GenAI systems.
About FuzzyAI
cyberark/FuzzyAI
A powerful tool for automated LLM fuzzing. It is designed to help developers and security researchers identify and mitigate potential jailbreaks in their LLM APIs.
This tool helps developers and security researchers proactively find and fix potential "jailbreaks" in their Large Language Model (LLM) APIs. You provide it with your LLM API endpoint and a set of test prompts, and it automatically generates many variations to discover if the model can be tricked into producing harmful or unintended responses. The output identifies specific vulnerabilities so you can make your LLM applications safer and more reliable.
About ps-fuzz
prompt-security/ps-fuzz
Make your GenAI Apps Safe & Secure :rocket: Test & harden your system prompt
This tool helps you evaluate and improve the security of your GenAI application's core instructions, known as the system prompt. It takes your system prompt as input and runs various simulated attacks to identify vulnerabilities, giving you a security evaluation. The person building or managing a GenAI application, concerned about misuse or security breaches, would use this to harden their system.
Related comparisons
Scores updated daily from GitHub, PyPI, and npm data. How scores work