oss-fuzz-gen and LLMFuzzer

These are complements: OSS-Fuzz-Gen generates fuzzing inputs for traditional software using LLMs, while LLMFuzzer uses fuzzing techniques to test the LLMs themselves, addressing different layers of the testing pipeline.

oss-fuzz-gen
59
Established
LLMFuzzer
46
Emerging
Maintenance 10/25
Adoption 10/25
Maturity 16/25
Community 23/25
Maintenance 0/25
Adoption 10/25
Maturity 16/25
Community 20/25
Stars: 1,372
Forks: 208
Downloads:
Commits (30d): 0
Language: Python
License: Apache-2.0
Stars: 347
Forks: 57
Downloads:
Commits (30d): 0
Language: Python
License: MIT
No Package No Dependents
Stale 6m No Package No Dependents

About oss-fuzz-gen

google/oss-fuzz-gen

LLM powered fuzzing via OSS-Fuzz.

This framework helps software security teams automate and enhance their fuzz testing efforts by using Large Language Models (LLMs) to generate new fuzz targets for C, C++, Java, and Python projects. It takes existing project code and an LLM as input, then outputs new fuzzing code and detailed reports on its effectiveness, including crash discovery and code coverage. This is intended for security engineers and quality assurance professionals focused on identifying vulnerabilities in open-source and proprietary software.

software-security vulnerability-research fuzz-testing static-analysis quality-assurance

About LLMFuzzer

mnns/LLMFuzzer

🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed for Large Language Models (LLMs), especially for their integrations in applications via LLM APIs. 🚀💥

This tool helps security enthusiasts and pentesters find vulnerabilities in applications that use Large Language Models (LLMs). It takes your LLM API endpoint and fuzzer configurations as input, then generates various malicious inputs to test the LLM's robustness and reveal security flaws in how your application interacts with the LLM. It's designed for cybersecurity researchers looking to stress-test AI systems.

AI security penetration testing vulnerability research LLM security application security

Scores updated daily from GitHub, PyPI, and npm data. How scores work