protectai/llm-guard
The Security Toolkit for LLM Interactions
This tool helps ensure the safety and security of applications that use Large Language Models (LLMs) by analyzing both the user's input and the LLM's response. It takes raw text inputs and outputs from an LLM, processes them through various security checks, and then either allows the interaction or flags potential issues like harmful content, data leaks, or prompt injection attacks. This is designed for developers and MLOps engineers building and deploying LLM-powered applications.
2,660 stars. Used by 2 other packages. Available on PyPI.
Use this if you are building an application with an LLM and need to protect it from malicious inputs, prevent sensitive data leakage, and filter out undesirable or harmful LLM outputs.
Not ideal if you are simply an end-user interacting with an LLM and do not have control over its deployment or underlying security infrastructure.
Stars
2,660
Forks
353
Language
Python
License
MIT
Category
Last pushed
Dec 15, 2025
Commits (30d)
0
Dependencies
12
Reverse dependents
2
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/protectai/llm-guard"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Compare
Related tools
MaxMLang/pytector
Easy to use LLM Prompt Injection Detection / Detector Python Package with support for local...
utkusen/promptmap
a security scanner for custom LLM applications
agencyenterprise/PromptInject
PromptInject is a framework that assembles prompts in a modular fashion to provide a...
Resk-Security/Resk-LLM
Resk is a robust Python library designed to enhance security and manage context when...
Dicklesworthstone/acip
The Advanced Cognitive Inoculation Prompt