genia-dev/vibraniumdome
LLM Security Platform.
This platform helps security teams protect applications that use AI agents by filtering potentially harmful inputs and outputs. It inspects interactions with your AI models, flagging or blocking common threats such as prompt injection and sensitive-data leakage, and provides a dashboard for oversight. Security engineers and operations teams deploying AI-powered applications would use it to help ensure compliance and data safety.
No commits in the last 6 months.
Use this if you are a security team concerned about the risks of deploying AI agents and need a comprehensive system to monitor and control their interactions.
Not ideal if you are a single developer building a simple AI prototype and do not require enterprise-grade security oversight or analytics.
Stars
26
Forks
5
Language
Python
License
GPL-3.0
Category
Last pushed
Oct 28, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/genia-dev/vibraniumdome"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
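The same endpoint can be called from Python. A minimal sketch using only the standard library, assuming the URL pattern shown in the curl example above; the shape of the JSON response is not documented here, so inspect it before relying on specific fields:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering"

def quality_url(owner: str, repo: str) -> str:
    """Build the API URL for a given owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record (100 requests/day without a key)."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)

# Example (makes a network request):
# data = fetch_quality("genia-dev", "vibraniumdome")
# print(data)
```

If you have a free API key, it would typically be passed as a request header, but the listing above does not specify the header name, so check the API's documentation.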
Higher-rated alternatives
protectai/llm-guard
The Security Toolkit for LLM Interactions
MaxMLang/pytector
Easy to use LLM Prompt Injection Detection / Detector Python Package with support for local...
utkusen/promptmap
a security scanner for custom LLM applications
agencyenterprise/PromptInject
PromptInject is a framework that assembles prompts in a modular fashion to provide a...
Resk-Security/Resk-LLM
Resk is a robust Python library designed to enhance security and manage context when...