jalvarezz13/prompt.fail
prompt.fail explores prompt injection techniques in large language models (LLMs), providing examples to improve LLM security and robustness.
This project helps AI security researchers and developers understand how malicious prompts can manipulate large language models (LLMs). It provides a collection of examples illustrating various prompt injection techniques. The goal is to identify vulnerabilities to make AI systems more secure and robust against unintended or harmful behaviors.
Use this if you are responsible for the security, robustness, or ethical behavior of AI systems that use large language models.
Not ideal if you are looking for an automated tool to prevent prompt injection or a step-by-step guide to implement defenses.
Stars
9
Forks
1
Language
TypeScript
License
GPL-3.0
Category
Last pushed
Jan 01, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/jalvarezz13/prompt.fail"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
langgptai/LLM-Jailbreaks
LLM Jailbreaks, ChatGPT, Claude, Llama, DAN Prompts, Prompt Leaking
rpidanny/llm-prompt-templates
Empower your LLM to do more than you ever thought possible with these state-of-the-art prompt templates.
Frosy01/Krita-Ollama-Prompt-Generator
🖌️ Generate and refine prompts directly in Krita with the local LLM-powered plugin, enabling...
kyahikaru/hinglish-prompt-injection-detector
A detection system for identifying prompt injection attempts in Hinglish (Hindi-English...
kilkelly/multiprompt
Send a prompt to multiple LLMs / text models / image models simultaneously