SemanticBrainCorp/SemanticShield

The Security Toolkit for managing Generative AI(especially LLMs) and Supervised Learning processes(Learning and Inference).

41
/ 100
Emerging

This tool helps businesses and organizations protect their AI systems and users by filtering inputs and outputs for harmful content, confidential data, and attempted misuse. It takes text interactions with AI models (like prompts or generated responses) and checks them against predefined security rules. The output is either the clean interaction or a flag indicating a security violation. This is ideal for DevSecOps personnel, AI ethics committees, and data privacy officers.

No commits in the last 6 months. Available on PyPI.

Use this if you need to ensure that interactions with your AI systems are safe, compliant, and free from sensitive data leaks or malicious attacks.

Not ideal if your primary concern is improving AI model performance or debugging AI system failures unrelated to security or content moderation.

AI security data privacy content moderation DevSecOps risk management
Stale 6m
Maintenance 2 / 25
Adoption 6 / 25
Maturity 25 / 25
Community 8 / 25

How are scores calculated?

Stars

22

Forks

2

Language

Python

License

MIT

Last pushed

Jun 25, 2025

Commits (30d)

0

Dependencies

7

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/SemanticBrainCorp/SemanticShield"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.