R3dShad0w7/PromptMe
PromptMe is an educational project that showcases security vulnerabilities in large language models (LLMs) and their web integrations. It includes 10 hands-on challenges, inspired by the OWASP LLM Top 10, that demonstrate how these vulnerabilities are discovered and exploited in real-world scenarios.
Aimed at AI security professionals, each challenge starts from a vulnerable LLM application built around a realistic scenario: you probe it for weaknesses, exploit them, and finish by capturing a flag.
No commits in the last 6 months.
Use this if you are an AI Security professional looking for hands-on experience in identifying and mitigating LLM security vulnerabilities.
Not ideal if you are looking for a general LLM development framework or a tool for non-security-related LLM tasks.
Stars
94
Forks
34
Language
Python
License
Apache-2.0
Category
Last pushed
Jun 29, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/R3dShad0w7/PromptMe"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
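The curl call above can also be scripted. Below is a minimal Python sketch using only the standard library; it assumes the endpoint returns JSON (the response schema is not documented here, so the example simply pretty-prints whatever comes back).

```python
import json
import urllib.request

# Endpoint from the curl example above.
URL = "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/R3dShad0w7/PromptMe"


def fetch_quality(url: str) -> dict:
    """Fetch the repo-quality record and parse it as JSON."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)


# Example usage (makes a live HTTP request):
# data = fetch_quality(URL)
# print(json.dumps(data, indent=2))
```

The anonymous tier allows 100 requests/day, so cache responses locally if you poll more than a handful of repos.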
Higher-rated alternatives
liu00222/Open-Prompt-Injection
This repository provides a benchmark for prompt injection attacks and defenses in LLMs
lakeraai/pint-benchmark
A benchmark for prompt injection detection systems.
cybozu/prompt-hardener
Prompt Hardener analyzes prompt-injection risks in LLM-based agents and applications.
StavC/Here-Comes-the-AI-Worm
Here Comes the AI Worm: Preventing the Propagation of Adversarial Self-Replicating Prompts...
mthamil107/prompt-shield
Self-learning prompt injection detection engine that gets smarter with every attack — 21...