yunwei37/prompt-hacker-collections

prompt attack-defense, prompt Injection, reverse engineering notes and examples | 提示词对抗、破解例子与笔记

41
/ 100
Emerging

This project helps security professionals and researchers understand and defend against prompt injection attacks on Large Language Models (LLMs) like ChatGPT. It provides a curated collection of example prompts and techniques for 'jailbreaking' LLMs, along with methods for reverse engineering and defending against such attacks. The input is various prompts, and the output is a deeper understanding of LLM vulnerabilities and defenses.

295 stars. No commits in the last 6 months.

Use this if you are a security professional, researcher, or student focused on understanding and mitigating risks associated with AI safety and prompt injection in large language models.

Not ideal if you are looking for a software tool or API to directly integrate into an existing application for automated prompt defense.

AI Security LLM Safety Prompt Engineering Cybersecurity Research Ethical Hacking
Stale 6m No Package No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 15 / 25

How are scores calculated?

Stars

295

Forks

32

Language

License

MIT

Last pushed

Feb 25, 2025

Commits (30d)

0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/yunwei37/prompt-hacker-collections"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.