North-Shore-AI/LlmGuard

AI Firewall and guardrails for LLM-based Elixir applications

34
/ 100
Emerging

This project helps application developers secure their Elixir-based AI applications. It takes user inputs and outputs from a Large Language Model (LLM) and scans them for potential threats. The output is a clear indication of whether the content is safe or if a threat like a prompt injection or data leakage was detected, allowing developers to protect their applications and users.

Use this if you are an Elixir developer building an application that uses a Large Language Model and you need to prevent security vulnerabilities like prompt injections or accidental data exposure.

Not ideal if you are not building an Elixir application or if your primary concern is basic input validation rather than advanced AI-specific threat detection.

AI application development LLM security prompt injection prevention data leakage protection application guardrails
No Package No Dependents
Maintenance 6 / 25
Adoption 4 / 25
Maturity 15 / 25
Community 9 / 25

How are scores calculated?

Stars

7

Forks

1

Language

Elixir

License

MIT

Last pushed

Dec 29, 2025

Commits (30d)

0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/North-Shore-AI/LlmGuard"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.