dronefreak/PromptScreen
Protect your LLMs from prompt injection and jailbreak attacks. Easy-to-use Python package with multiple detection methods, CLI tool, and FastAPI integration.
This tool helps safeguard large language model (LLM) applications by detecting and preventing malicious prompts that try to bypass safety measures or inject harmful instructions. It takes user input prompts as an input and determines if they are safe or if they constitute an attack, such as prompt injection or jailbreaking. This is for developers building and deploying LLM-powered applications who need to ensure the security and integrity of their AI systems.
Available on PyPI.
Use this if you are developing an application that uses large language models and need to protect it from users attempting to manipulate or exploit your AI with malicious prompts.
Not ideal if you are looking for a general content moderation tool or a solution for non-LLM specific security threats.
Stars
9
Forks
4
Language
Python
License
Apache-2.0
Category
Last pushed
Jan 04, 2026
Commits (30d)
0
Dependencies
8
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/dronefreak/PromptScreen"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related tools
anmolksachan/LLMInjector
Burp Suite Extension for LLM Prompt Injection Testing
rv427447/Cognitive-Hijacking-in-Long-Context-LLMs
🧠Explore cognitive hijacking in long-context LLMs, revealing vulnerabilities in prompt...
moketchups/permanently-jailbroken
We asked 6 AIs about their own programming. All 6 said jailbreaking will never be fixed. Run it...
AhsanAyub/malicious-prompt-detection
Detection of malicious prompts used to exploit large language models (LLMs) by leveraging...
AdityaBhatt3010/When-LinkedIn-Gmail-Obey-Hidden-AI-Prompts-Lessons-in-Indirect-Prompt-Injection
A real-world look at how hidden instructions in profiles and emails trick AI into unexpected...