promptslab/LLM-Prompt-Vulnerabilities
Prompts Methods to find the vulnerabilities in Generative Models
This helps AI safety researchers and ethicists uncover weaknesses in large language models. You input various prompts designed to trick the AI, and it helps identify instances where the model ignores instructions or reveals unintended information. The output is documentation of these vulnerabilities, assisting those focused on responsible AI development.
No commits in the last 6 months.
Use this if you are a researcher or practitioner dedicated to finding and documenting security and ethical vulnerabilities in large language models.
Not ideal if you are looking to secure a specific application or production system against prompt injection, as this is a research tool for discovering vulnerabilities rather than a defensive solution.
Stars
20
Forks
3
Language
—
License
—
Category
Last pushed
Feb 23, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/promptslab/LLM-Prompt-Vulnerabilities"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
protectai/llm-guard
The Security Toolkit for LLM Interactions
MaxMLang/pytector
Easy to use LLM Prompt Injection Detection / Detector Python Package with support for local...
utkusen/promptmap
a security scanner for custom LLM applications
agencyenterprise/PromptInject
PromptInject is a framework that assembles prompts in a modular fashion to provide a...
Resk-Security/Resk-LLM
Resk is a robust Python library designed to enhance security and manage context when...