North-Shore-AI/LlmGuard
AI Firewall and guardrails for LLM-based Elixir applications
LlmGuard helps developers secure Elixir-based AI applications. It scans user inputs and Large Language Model (LLM) outputs for potential threats and returns a clear verdict: either the content is safe, or a threat such as prompt injection or data leakage was detected, so developers can protect their applications and users.
Use this if you are an Elixir developer building an application on top of a Large Language Model and you need to prevent security vulnerabilities such as prompt injection or accidental data exposure.
Not ideal if you are not building an Elixir application, or if your primary concern is basic input validation rather than AI-specific threat detection.
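To make the workflow concrete, here is a minimal sketch of how a guardrail check might sit in front of a model call. LlmGuard.scan/1, its return shapes, and the call_llm/1 helper are assumptions for illustration, not the library's confirmed API; consult the project's documentation for the real module and function names.

defmodule MyApp.Chat do
  require Logger

  # Hypothetical usage sketch: LlmGuard.scan/1 and its return values are
  # assumed for illustration and may not match the real library API.
  def handle_user_message(text) do
    case LlmGuard.scan(text) do
      {:ok, :safe} ->
        # Input passed the checks; forward it to the model.
        {:ok, call_llm(text)}

      {:error, {:threat, kind}} ->
        # A threat such as :prompt_injection or :data_leakage was flagged.
        Logger.warning("LlmGuard blocked input: #{inspect(kind)}")
        {:error, :blocked}
    end
  end

  # Stand-in for the actual LLM call.
  defp call_llm(_text), do: "model response"
end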
Stars: 7
Forks: 1
Language: Elixir
License: MIT
Last pushed: Dec 29, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/North-Shore-AI/LlmGuard"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
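For Elixir applications, the same endpoint can be fetched with an HTTP client instead of curl. The snippet below is a sketch using the Req library; the response schema is not documented here, so it inspects the decoded body rather than assuming specific field names.

# Standalone script: Mix.install fetches the Req HTTP client.
Mix.install([{:req, "~> 0.5"}])

url = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/North-Shore-AI/LlmGuard"

# Req decodes JSON response bodies into a map automatically.
response = Req.get!(url)
IO.inspect(response.body, label: "quality data")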
Higher-rated alternatives
ethz-spylab/agentdojo
A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents.
guardrails-ai/guardrails
Adding guardrails to large language models.
JasonLovesDoggo/caddy-defender
Caddy module to block or manipulate requests originating from AIs or cloud services trying to...
deadbits/vigil-llm
⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language...
inkdust2021/VibeGuard
Uses just 1% memory while protecting 99% of your personal privacy.