protectai/llm-guard

The Security Toolkit for LLM Interactions

Score: 65 / 100 (Established)

LLM Guard helps secure applications built on Large Language Models (LLMs) by analyzing both the user's input and the model's response. It takes raw prompt and output text, runs it through a series of security scanners, and then either allows the interaction or flags issues such as harmful content, data leakage, or prompt injection attacks. It is designed for developers and MLOps engineers building and deploying LLM-powered applications.

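As a rough sketch of that scan-then-forward flow, the example below assumes the package's documented scan_prompt and scan_output helpers plus a handful of its stock scanners; treat the scanner names, return shapes, and the stand-in call_llm function as illustrative rather than a definitive integration.

    from llm_guard import scan_prompt, scan_output
    from llm_guard.input_scanners import PromptInjection, Toxicity
    from llm_guard.output_scanners import NoRefusal, Sensitivity

    def call_llm(prompt: str) -> str:
        # Hypothetical stand-in for your actual model call (hosted API, local model, etc.).
        return "stubbed model reply"

    # Scanners applied to the user's prompt before it reaches the model.
    input_scanners = [PromptInjection(), Toxicity()]
    # Scanners applied to the model's reply before it reaches the user.
    output_scanners = [NoRefusal(), Sensitivity()]

    prompt = "Summarize this support ticket for me."
    sanitized_prompt, prompt_valid, prompt_scores = scan_prompt(input_scanners, prompt)
    if not all(prompt_valid.values()):
        raise ValueError(f"Prompt blocked: {prompt_scores}")

    response = call_llm(sanitized_prompt)
    sanitized_response, output_valid, output_scores = scan_output(
        output_scanners, sanitized_prompt, response
    )
    if not all(output_valid.values()):
        raise ValueError(f"Response flagged: {output_scores}")

Each scanner contributes a pass/fail verdict and a risk score, so the calling application decides whether to block, log, or forward the sanitized text.
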
2,660 stars. Used by 2 other packages. Available on PyPI.

Use this if you are building an application with an LLM and need to protect it from malicious inputs, prevent sensitive data leakage, and filter out undesirable or harmful LLM outputs.

Not ideal if you are simply an end-user interacting with an LLM and do not have control over its deployment or underlying security infrastructure.

Tags: LLM security, MLOps, application security, prompt engineering, data privacy
Maintenance 6 / 25
Adoption 12 / 25
Maturity 25 / 25
Community 22 / 25

Stars: 2,660
Forks: 353
Language: Python
License: MIT
Last pushed: Dec 15, 2025
Commits (30d): 0
Dependencies: 12
Reverse dependents: 2

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/protectai/llm-guard"

Open to everyone: 100 requests/day with no API key. Get a free key for 1,000 requests/day.
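
The same record can be pulled from Python with just the standard library; a minimal sketch is below, using anonymous access only, since the page doesn't specify how the optional API key is passed.

    import json
    import urllib.request

    URL = ("https://pt-edge.onrender.com/api/v1/quality/"
           "prompt-engineering/protectai/llm-guard")

    # Anonymous requests are limited to 100/day; a free key raises that to 1,000/day.
    with urllib.request.urlopen(URL) as resp:
        data = json.load(resp)

    # Field names in the payload aren't documented here, so just pretty-print it.
    print(json.dumps(data, indent=2))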