cysecbench/dataset
Generative AI-based CyberSecurity-focused Prompt Dataset for Benchmarking Large Language Models
This project provides a specialized collection of over 12,000 prompts designed to test the security of large language models against cyberattack-related queries. Given specific cybersecurity attack scenarios, it helps evaluate how easily an AI model can be tricked into generating harmful content. Cybersecurity researchers and AI security engineers can use it to benchmark and improve the safety of generative AI systems.
No commits in the last 6 months.
Use this if you need to rigorously test the resilience of an AI model against prompts designed to bypass its safety measures related to cybersecurity threats.
Not ideal if you are looking for a general-purpose dataset for training AI models or for evaluating typical conversational AI performance.
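As a minimal sketch of how such a benchmark run might look: the snippet below assumes the prompts are distributed as plain-text files with one prompt per line, and uses a simple keyword heuristic to classify refusals. Both the file layout and the refusal markers are assumptions for illustration, not the repository's actual format or methodology.

```python
import re

def load_prompts(path):
    """Read one prompt per line, skipping blanks (assumed file layout)."""
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]

# Hypothetical refusal phrases; a real benchmark would use a stronger classifier.
REFUSAL_MARKERS = re.compile(
    r"(can't help|cannot assist|i'm sorry|against .{0,20}policy)", re.I
)

def is_refusal(response: str) -> bool:
    """Heuristic: does the model's response look like a safety refusal?"""
    return bool(REFUSAL_MARKERS.search(response))

def refusal_rate(responses) -> float:
    """Fraction of responses classified as refusals (higher = more resistant)."""
    if not responses:
        return 0.0
    return sum(is_refusal(r) for r in responses) / len(responses)
```

You would feed each loaded prompt to the model under test and pass the collected responses to `refusal_rate`; a low rate suggests the model is easily induced to answer harmful queries.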
Stars: 33
Forks: 9
Language: Python
License: MIT
Category:
Last pushed: Jan 14, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/cysecbench/dataset"
Open to everyone: 100 requests/day with no key required. Get a free key to raise the limit to 1,000/day.
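The same endpoint can be called from Python instead of curl; a minimal sketch using only the standard library is below. The `X-API-Key` header name and the JSON response schema are assumptions (the listing does not document how a key is sent), so check the service's docs before relying on them.

```python
import json
import urllib.request
from typing import Optional

BASE = "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering"

def build_request(repo: str, api_key: Optional[str] = None) -> urllib.request.Request:
    """Build the GET request for a repo's quality data.

    The X-API-Key header name is an assumption; the no-key tier
    works without any header at all.
    """
    req = urllib.request.Request(f"{BASE}/{repo}")
    if api_key:
        req.add_header("X-API-Key", api_key)
    return req

def fetch(repo: str, api_key: Optional[str] = None) -> dict:
    """Fetch and decode the JSON payload (schema not documented here)."""
    with urllib.request.urlopen(build_request(repo, api_key)) as resp:
        return json.load(resp)
```

For example, `fetch("cysecbench/dataset")` would retrieve the same data as the curl command above, subject to the 100 requests/day anonymous limit.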
Higher-rated alternatives
protectai/llm-guard
The Security Toolkit for LLM Interactions
MaxMLang/pytector
Easy to use LLM Prompt Injection Detection / Detector Python Package with support for local...
utkusen/promptmap
a security scanner for custom LLM applications
agencyenterprise/PromptInject
PromptInject is a framework that assembles prompts in a modular fashion to provide a...
Resk-Security/Resk-LLM
Resk is a robust Python library designed to enhance security and manage context when...