lakeraai/pint-benchmark
A benchmark for prompt injection detection systems.
This project offers a standardized way to compare how well different AI systems can spot and block malicious 'prompt injection' attacks. It takes various text inputs, including those designed to trick AI models, and evaluates if a detection system correctly identifies them as harmful or safe. AI developers, security engineers, and MLOps teams can use this to rigorously assess and improve their AI system's defenses.
166 stars.
Use this if you need an impartial and comprehensive way to test the effectiveness of prompt injection detection systems for your large language models, especially against a diverse set of real-world and challenging inputs.
Not ideal if you are a general user looking for a ready-made prompt injection solution rather than a tool for evaluating existing systems.
Stars
166
Forks
21
Language
Jupyter Notebook
License
MIT
Category
Last pushed
Dec 16, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/lakeraai/pint-benchmark"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
liu00222/Open-Prompt-Injection
This repository provides a benchmark for prompt injection attacks and defenses in LLMs
R3dShad0w7/PromptMe
PromptMe is an educational project that showcases security vulnerabilities in large language...
cybozu/prompt-hardener
Prompt Hardener analyzes prompt-injection-originated risk in LLM-based agents and applications.
StavC/Here-Comes-the-AI-Worm
Here Comes the AI Worm: Preventing the Propagation of Adversarial Self-Replicating Prompts...
mthamil107/prompt-shield
Self-learning prompt injection detection engine that gets smarter with every attack — 21...