FuzzyAI and ps-fuzz

These tools are **complements** that address different aspects of LLM security testing: FuzzyAI focuses on identifying jailbreaks through adversarial fuzzing of LLM APIs, while ps-fuzz specializes in hardening system prompts through targeted testing, allowing security teams to use both in sequence to comprehensively validate their GenAI systems.

FuzzyAI
58
Established
ps-fuzz
56
Established
Maintenance 10/25
Adoption 10/25
Maturity 16/25
Community 22/25
Maintenance 10/25
Adoption 10/25
Maturity 16/25
Community 20/25
Stars: 1,250
Forks: 174
Downloads:
Commits (30d): 0
Language: Jupyter Notebook
License: Apache-2.0
Stars: 642
Forks: 88
Downloads:
Commits (30d): 0
Language: Python
License: MIT
No Package No Dependents
No Package No Dependents

About FuzzyAI

cyberark/FuzzyAI

A powerful tool for automated LLM fuzzing. It is designed to help developers and security researchers identify and mitigate potential jailbreaks in their LLM APIs.

This tool helps developers and security researchers proactively find and fix potential "jailbreaks" in their Large Language Model (LLM) APIs. You provide it with your LLM API endpoint and a set of test prompts, and it automatically generates many variations to discover if the model can be tricked into producing harmful or unintended responses. The output identifies specific vulnerabilities so you can make your LLM applications safer and more reliable.

LLM security API testing vulnerability research prompt engineering AI safety

About ps-fuzz

prompt-security/ps-fuzz

Make your GenAI Apps Safe & Secure :rocket: Test & harden your system prompt

This tool helps you evaluate and improve the security of your GenAI application's core instructions, known as the system prompt. It takes your system prompt as input and runs various simulated attacks to identify vulnerabilities, giving you a security evaluation. The person building or managing a GenAI application, concerned about misuse or security breaches, would use this to harden their system.

GenAI Security Prompt Engineering Application Hardening AI Risk Management LLM Safety

Scores updated daily from GitHub, PyPI, and npm data. How scores work