oss-fuzz-gen and ps-fuzz
These are complements: OSS-Fuzz-Gen uses LLMs to generate fuzzing test cases for finding vulnerabilities in open-source software, while PS-Fuzz uses fuzzing techniques to test the robustness of LLM system prompts themselves—addressing security from opposite directions in the AI+security stack.
About oss-fuzz-gen
google/oss-fuzz-gen
LLM powered fuzzing via OSS-Fuzz.
This framework helps software security teams automate and enhance their fuzz testing efforts by using Large Language Models (LLMs) to generate new fuzz targets for C, C++, Java, and Python projects. It takes existing project code and an LLM as input, then outputs new fuzzing code and detailed reports on its effectiveness, including crash discovery and code coverage. This is intended for security engineers and quality assurance professionals focused on identifying vulnerabilities in open-source and proprietary software.
About ps-fuzz
prompt-security/ps-fuzz
Make your GenAI Apps Safe & Secure :rocket: Test & harden your system prompt
This tool helps you evaluate and improve the security of your GenAI application's core instructions, known as the system prompt. It takes your system prompt as input and runs various simulated attacks to identify vulnerabilities, giving you a security evaluation. The person building or managing a GenAI application, concerned about misuse or security breaches, would use this to harden their system.
Related comparisons
Scores updated daily from GitHub, PyPI, and npm data. How scores work