llm-guard and SemanticShield

These are competitors offering overlapping prompt injection and LLM security capabilities, though llm-guard dominates with significantly broader adoption and a more mature ecosystem of detectors and sanitizers.

llm-guard
65
Established
SemanticShield
41
Emerging
Maintenance 6/25
Adoption 12/25
Maturity 25/25
Community 22/25
Maintenance 2/25
Adoption 6/25
Maturity 25/25
Community 8/25
Stars: 2,660
Forks: 353
Downloads:
Commits (30d): 0
Language: Python
License: MIT
Stars: 22
Forks: 2
Downloads:
Commits (30d): 0
Language: Python
License: MIT
No risk flags
Stale 6m

About llm-guard

protectai/llm-guard

The Security Toolkit for LLM Interactions

This tool helps ensure the safety and security of applications that use Large Language Models (LLMs) by analyzing both the user's input and the LLM's response. It takes raw text inputs and outputs from an LLM, processes them through various security checks, and then either allows the interaction or flags potential issues like harmful content, data leaks, or prompt injection attacks. This is designed for developers and MLOps engineers building and deploying LLM-powered applications.

LLM security MLOps application security prompt engineering data privacy

About SemanticShield

SemanticBrainCorp/SemanticShield

The Security Toolkit for managing Generative AI(especially LLMs) and Supervised Learning processes(Learning and Inference).

This tool helps businesses and organizations protect their AI systems and users by filtering inputs and outputs for harmful content, confidential data, and attempted misuse. It takes text interactions with AI models (like prompts or generated responses) and checks them against predefined security rules. The output is either the clean interaction or a flag indicating a security violation. This is ideal for DevSecOps personnel, AI ethics committees, and data privacy officers.

AI security data privacy content moderation DevSecOps risk management

Scores updated daily from GitHub, PyPI, and npm data. How scores work