llm-guard and SemanticShield
These are competitors offering overlapping prompt injection and LLM security capabilities, though llm-guard dominates with significantly broader adoption and a more mature ecosystem of detectors and sanitizers.
About llm-guard
protectai/llm-guard
The Security Toolkit for LLM Interactions
This tool helps ensure the safety and security of applications that use Large Language Models (LLMs) by analyzing both the user's input and the LLM's response. It takes raw text inputs and outputs from an LLM, processes them through various security checks, and then either allows the interaction or flags potential issues like harmful content, data leaks, or prompt injection attacks. This is designed for developers and MLOps engineers building and deploying LLM-powered applications.
About SemanticShield
SemanticBrainCorp/SemanticShield
The Security Toolkit for managing Generative AI(especially LLMs) and Supervised Learning processes(Learning and Inference).
This tool helps businesses and organizations protect their AI systems and users by filtering inputs and outputs for harmful content, confidential data, and attempted misuse. It takes text interactions with AI models (like prompts or generated responses) and checks them against predefined security rules. The output is either the clean interaction or a flag indicating a security violation. This is ideal for DevSecOps personnel, AI ethics committees, and data privacy officers.
Scores updated daily from GitHub, PyPI, and npm data. How scores work