jalvarezz13/prompt.fail

prompt.fail explores prompt injection techniques in large language models (LLMs), providing examples to improve LLM security and robustness.

35
/ 100
Emerging

This project helps AI security researchers and developers understand how malicious prompts can manipulate large language models (LLMs). It provides a collection of examples illustrating various prompt injection techniques. The goal is to identify vulnerabilities to make AI systems more secure and robust against unintended or harmful behaviors.

Use this if you are responsible for the security, robustness, or ethical behavior of AI systems that use large language models.

Not ideal if you are looking for an automated tool to prevent prompt injection or a step-by-step guide to implement defenses.

AI-security LLM-safety AI-vulnerability-research prompt-engineering-security cybersecurity-research
No Package No Dependents
Maintenance 6 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 8 / 25

How are scores calculated?

Stars

9

Forks

1

Language

TypeScript

License

GPL-3.0

Last pushed

Jan 01, 2026

Commits (30d)

0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/jalvarezz13/prompt.fail"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.