promptslab/LLM-Prompt-Vulnerabilities

Prompts Methods to find the vulnerabilities in Generative Models

26
/ 100
Experimental

This helps AI safety researchers and ethicists uncover weaknesses in large language models. You input various prompts designed to trick the AI, and it helps identify instances where the model ignores instructions or reveals unintended information. The output is documentation of these vulnerabilities, assisting those focused on responsible AI development.

No commits in the last 6 months.

Use this if you are a researcher or practitioner dedicated to finding and documenting security and ethical vulnerabilities in large language models.

Not ideal if you are looking to secure a specific application or production system against prompt injection, as this is a research tool for discovering vulnerabilities rather than a defensive solution.

AI-safety LLM-security responsible-AI ethical-AI vulnerability-research
No License Stale 6m No Package No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 8 / 25
Community 12 / 25

How are scores calculated?

Stars

20

Forks

3

Language

License

Last pushed

Feb 23, 2023

Commits (30d)

0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/promptslab/LLM-Prompt-Vulnerabilities"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.