SemanticBrainCorp/SemanticShield
The security toolkit for managing generative AI (especially LLMs) and supervised learning processes (training and inference).
This tool helps businesses and organizations protect their AI systems and users by filtering inputs and outputs for harmful content, confidential data, and attempted misuse. It takes text interactions with AI models (like prompts or generated responses) and checks them against predefined security rules. The output is either the clean interaction or a flag indicating a security violation. This is ideal for DevSecOps personnel, AI ethics committees, and data privacy officers.
No commits in the last 6 months. Available on PyPI.
Use this if you need to ensure that interactions with your AI systems are safe, compliant, and free from sensitive data leaks or malicious attacks.
Not ideal if your primary concern is improving AI model performance or debugging AI system failures unrelated to security or content moderation.
Stars
22
Forks
2
Language
Python
License
MIT
Category
Prompt Engineering
Last pushed
Jun 25, 2025
Commits (30d)
0
Dependencies
7
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/SemanticBrainCorp/SemanticShield"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
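The same endpoint can be called from Python. Below is a minimal sketch using only the standard library; the URL path follows the `curl` command above, and the assumption that the response body is JSON is mine, not stated on this page:

```python
import json
import urllib.request

BASE_URL = "https://pt-edge.onrender.com/api/v1/quality"

def build_url(category: str, owner: str, name: str) -> str:
    # Compose the per-repository endpoint path shown in the curl example.
    return f"{BASE_URL}/{category}/{owner}/{name}"

def fetch_repo_data(category: str, owner: str, name: str) -> dict:
    # Unauthenticated GET; the free tier allows 100 requests/day.
    # Assumes the endpoint returns a JSON object.
    with urllib.request.urlopen(build_url(category, owner, name), timeout=10) as resp:
        return json.load(resp)

# Example (performs a live request, so it is left commented out):
# data = fetch_repo_data("prompt-engineering", "SemanticBrainCorp", "SemanticShield")
```

For authenticated (1,000/day) access you would attach your key to the request; the exact header or query-parameter name is not documented here, so check the API's own docs before relying on a particular scheme.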
Higher-rated alternatives
protectai/llm-guard
The Security Toolkit for LLM Interactions
MaxMLang/pytector
Easy to use LLM Prompt Injection Detection / Detector Python Package with support for local...
utkusen/promptmap
a security scanner for custom LLM applications
agencyenterprise/PromptInject
PromptInject is a framework that assembles prompts in a modular fashion to provide a...
Resk-Security/Resk-LLM
Resk is a robust Python library designed to enhance security and manage context when...