FuzzyAI and LLMFuzzer

These are complementary tools that address different aspects of LLM security testing: FuzzyAI focuses on identifying jailbreaks in LLM APIs through automated fuzzing, while LLMFuzzer provides a general-purpose fuzzing framework for LLM integration testing, allowing security researchers to use them together for comprehensive vulnerability discovery.

FuzzyAI
58
Established
LLMFuzzer
46
Emerging
Maintenance 10/25
Adoption 10/25
Maturity 16/25
Community 22/25
Maintenance 0/25
Adoption 10/25
Maturity 16/25
Community 20/25
Stars: 1,250
Forks: 174
Downloads:
Commits (30d): 0
Language: Jupyter Notebook
License: Apache-2.0
Stars: 347
Forks: 57
Downloads:
Commits (30d): 0
Language: Python
License: MIT
No Package No Dependents
Stale 6m No Package No Dependents

About FuzzyAI

cyberark/FuzzyAI

A powerful tool for automated LLM fuzzing. It is designed to help developers and security researchers identify and mitigate potential jailbreaks in their LLM APIs.

This tool helps developers and security researchers proactively find and fix potential "jailbreaks" in their Large Language Model (LLM) APIs. You provide it with your LLM API endpoint and a set of test prompts, and it automatically generates many variations to discover if the model can be tricked into producing harmful or unintended responses. The output identifies specific vulnerabilities so you can make your LLM applications safer and more reliable.

LLM security API testing vulnerability research prompt engineering AI safety

About LLMFuzzer

mnns/LLMFuzzer

🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed for Large Language Models (LLMs), especially for their integrations in applications via LLM APIs. 🚀💥

This tool helps security enthusiasts and pentesters find vulnerabilities in applications that use Large Language Models (LLMs). It takes your LLM API endpoint and fuzzer configurations as input, then generates various malicious inputs to test the LLM's robustness and reveal security flaws in how your application interacts with the LLM. It's designed for cybersecurity researchers looking to stress-test AI systems.

AI security penetration testing vulnerability research LLM security application security

Scores updated daily from GitHub, PyPI, and npm data. How scores work