FuzzyAI and LLMFuzzer
These are complementary tools that address different aspects of LLM security testing: FuzzyAI focuses on identifying jailbreaks in LLM APIs through automated fuzzing, while LLMFuzzer provides a general-purpose fuzzing framework for LLM integration testing, allowing security researchers to use them together for comprehensive vulnerability discovery.
About FuzzyAI
cyberark/FuzzyAI
A powerful tool for automated LLM fuzzing. It is designed to help developers and security researchers identify and mitigate potential jailbreaks in their LLM APIs.
This tool helps developers and security researchers proactively find and fix potential "jailbreaks" in their Large Language Model (LLM) APIs. You provide it with your LLM API endpoint and a set of test prompts, and it automatically generates many variations to discover if the model can be tricked into producing harmful or unintended responses. The output identifies specific vulnerabilities so you can make your LLM applications safer and more reliable.
About LLMFuzzer
mnns/LLMFuzzer
🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed for Large Language Models (LLMs), especially for their integrations in applications via LLM APIs. 🚀💥
This tool helps security enthusiasts and pentesters find vulnerabilities in applications that use Large Language Models (LLMs). It takes your LLM API endpoint and fuzzer configurations as input, then generates various malicious inputs to test the LLM's robustness and reveal security flaws in how your application interacts with the LLM. It's designed for cybersecurity researchers looking to stress-test AI systems.
Related comparisons
Scores updated daily from GitHub, PyPI, and npm data. How scores work