MaxMLang/pytector
An easy-to-use Python package for LLM prompt injection detection, with support for local models, API-based safeguards, and LangChain guardrails.
This tool helps developers of AI applications keep malicious instructions and data out of their Large Language Models (LLMs). Given user text, it flags potential prompt injection attempts, unsafe content, and sensitive personal information, and returns a clear indication of whether the input is safe to process. It is useful to anyone building or operating AI applications that handle user-provided text, helping ensure their models don't leak private data or behave unexpectedly.
Used by 1 other package. Available on PyPI.
Use this if you are developing or deploying an AI application and need to add robust security layers to prevent prompt injection, detect sensitive data, or filter out toxic content before it reaches your LLM.
Not ideal if you need guaranteed protection against every type of attack: it should be combined with other security measures or expert review, especially for highly sensitive production systems.
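As a rough illustration of the workflow described above, the sketch below shows how a detector from this package might be placed in front of an LLM call. The class and method names used here (PromptInjectionDetector, detect_injection, model_name_or_url) are assumptions based on the package description, not a confirmed API; check the repository README for the actual interface.

    # Minimal sketch: screen user input before it reaches the LLM.
    # NOTE: the class/method names below are assumptions, not the confirmed pytector API.
    from pytector import PromptInjectionDetector  # assumed import path

    detector = PromptInjectionDetector(model_name_or_url="deberta")  # assumed local-model option

    def safe_prompt(user_text: str) -> str:
        # Assumed to return a flag and a confidence score for the given text.
        is_injection, probability = detector.detect_injection(user_text)
        if is_injection:
            raise ValueError(f"Blocked possible prompt injection (p={probability:.2f})")
        return user_text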
Stars: 38
Forks: 22
Language: Python
License: Apache-2.0
Category: prompt-engineering
Last pushed: Feb 14, 2026
Commits (30d): 0
Dependencies: 3
Reverse dependents: 1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/MaxMLang/pytector"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
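The same data can be fetched from Python; the sketch below uses the requests library against the endpoint shown above, staying within the no-key limit.

    import requests

    # Fetch quality data for MaxMLang/pytector from the public endpoint above.
    url = "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/MaxMLang/pytector"
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    print(response.json())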
Related tools
protectai/llm-guard
The Security Toolkit for LLM Interactions
utkusen/promptmap
a security scanner for custom LLM applications
agencyenterprise/PromptInject
PromptInject is a framework that assembles prompts in a modular fashion to provide a...
Resk-Security/Resk-LLM
Resk is a robust Python library designed to enhance security and manage context when...
Dicklesworthstone/acip
The Advanced Cognitive Inoculation Prompt