dropbox/llm-security

Dropbox LLM Security research code and results

42
/ 100
Emerging

This project helps security researchers and AI safety engineers understand and demonstrate 'prompt injection' attacks against large language models like ChatGPT. By crafting specific inputs with repeated tokens, you can observe how an LLM's intended behavior can be overridden, potentially leading to unintended responses or even data leakage. It's designed for those who work on securing AI applications and need to validate system robustness.

256 stars. No commits in the last 6 months.

Use this if you are an AI security researcher or engineer responsible for identifying and mitigating vulnerabilities in LLM-powered applications.

Not ideal if you are looking for a general-purpose LLM development tool or a solution for common application-level prompt engineering.

AI-security LLM-vulnerability-research prompt-injection AI-safety red-teaming
Stale 6m No Package No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 16 / 25

How are scores calculated?

Stars

256

Forks

29

Language

Python

License

Apache-2.0

Last pushed

May 21, 2024

Commits (30d)

0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/dropbox/llm-security"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.