FuzzyAI and TurboFuzzLLM
These are complementary tools that address different fuzzing strategies—FuzzyAI uses LLM-guided approaches to generate jailbreak attempts, while TurboFuzzLLM accelerates mutation-based fuzzing—making them suitable for combined use in comprehensive LLM security testing.
About FuzzyAI
cyberark/FuzzyAI
A powerful tool for automated LLM fuzzing. It is designed to help developers and security researchers identify and mitigate potential jailbreaks in their LLM APIs.
This tool helps developers and security researchers proactively find and fix potential "jailbreaks" in their Large Language Model (LLM) APIs. You provide it with your LLM API endpoint and a set of test prompts, and it automatically generates many variations to discover if the model can be tricked into producing harmful or unintended responses. The output identifies specific vulnerabilities so you can make your LLM applications safer and more reliable.
About TurboFuzzLLM
amazon-science/TurboFuzzLLM
TurboFuzzLLM: Turbocharging Mutation-based Fuzzing for Effectively Jailbreaking Large Language Models in Practice
This tool helps AI safety researchers and red teamers automatically find weaknesses in Large Language Models (LLMs). It takes a list of potentially harmful questions and, through an iterative process, generates new, subtly modified prompts that can 'jailbreak' the LLM. The output is a collection of effective adversarial prompt templates that show how the model can be tricked into generating undesirable responses, allowing for improved AI safeguards.
Related comparisons
Scores updated daily from GitHub, PyPI, and npm data. How scores work