lakeraai/pint-benchmark

A benchmark for prompt injection detection systems.

47
/ 100
Emerging

This project offers a standardized way to compare how well different AI systems can spot and block malicious 'prompt injection' attacks. It takes various text inputs, including those designed to trick AI models, and evaluates if a detection system correctly identifies them as harmful or safe. AI developers, security engineers, and MLOps teams can use this to rigorously assess and improve their AI system's defenses.

166 stars.

Use this if you need an impartial and comprehensive way to test the effectiveness of prompt injection detection systems for your large language models, especially against a diverse set of real-world and challenging inputs.

Not ideal if you are a general user looking for a ready-made prompt injection solution rather than a tool for evaluating existing systems.

AI security LLM evaluation prompt engineering MLOps AI governance
No Package No Dependents
Maintenance 6 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 15 / 25

How are scores calculated?

Stars

166

Forks

21

Language

Jupyter Notebook

License

MIT

Last pushed

Dec 16, 2025

Commits (30d)

0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/lakeraai/pint-benchmark"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.