amk9978/Guardian
The LLM guardian kernel
This tool helps developers and system administrators secure their applications that use Large Language Models (LLMs). It acts as a central control point, taking in user prompts destined for an LLM, and then sending them to various security checks. The system then determines if the prompt is safe or malicious before it reaches the LLM, protecting against potential misuse.
No commits in the last 6 months.
Use this if you need a fast, extensible system to filter and secure user inputs before they interact with your LLMs, especially in production environments.
Not ideal if you are a non-technical user looking for a ready-to-use application with a graphical interface for LLM security.
Stars
10
Forks
3
Language
Go
License
—
Category
Last pushed
Feb 11, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/amk9978/Guardian"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
protectai/llm-guard
The Security Toolkit for LLM Interactions
MaxMLang/pytector
Easy to use LLM Prompt Injection Detection / Detector Python Package with support for local...
utkusen/promptmap
a security scanner for custom LLM applications
agencyenterprise/PromptInject
PromptInject is a framework that assembles prompts in a modular fashion to provide a...
Resk-Security/Resk-LLM
Resk is a robust Python library designed to enhance security and manage context when...