MaxMLang/pytector

Easy to use LLM Prompt Injection Detection / Detector Python Package with support for local models, API-based safeguards, and LangChain guardrails.

62
/ 100
Established

This tool helps developers and AI application builders prevent malicious instructions or data from being fed into their Large Language Models (LLMs). It takes user text input and identifies potential 'prompt injection' attempts, unsafe content, or sensitive personal information, outputting a clear indication of whether the input is safe to process. Anyone building or managing AI applications that interact with user-provided text would benefit from this, ensuring their models don't expose private data or behave unexpectedly.

Used by 1 other package. Available on PyPI.

Use this if you are developing or deploying an AI application and need to add robust security layers to prevent prompt injection, detect sensitive data, or filter out toxic content before it reaches your LLM.

Not ideal if you need a guaranteed 100% protection against all types of attacks without combining it with other security measures or expert consultation, especially for highly sensitive production systems.

AI application development LLM security data privacy content moderation application security
Maintenance 10 / 25
Adoption 8 / 25
Maturity 25 / 25
Community 19 / 25

How are scores calculated?

Stars

38

Forks

22

Language

Python

License

Apache-2.0

Last pushed

Feb 14, 2026

Commits (30d)

0

Dependencies

3

Reverse dependents

1

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/MaxMLang/pytector"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.