taladari/rag-firewall

Client-side retrieval firewall for RAG systems — blocks prompt injection and secret leaks, re-ranks stale or untrusted content, and keeps all data inside your environment.

41
/ 100
Emerging

This tool helps teams building applications using large language models (LLMs) ensure the safety and reliability of the information those models use. It acts as a gatekeeper, scanning the data retrieved to answer a user's question and blocking sensitive information like secrets or malicious instructions before it ever reaches the LLM. The result is safer, more trustworthy LLM responses for end-users in fields like finance, government, or healthcare.

No commits in the last 6 months. Available on PyPI.

Use this if you are developing AI applications and need to prevent sensitive data leaks or prompt injection attacks by filtering the information your LLM receives.

Not ideal if you need to filter the LLM's final response or are looking for a cloud-based security service.

AI-application-security data-privacy LLM-safety information-governance risk-management
Stale 6m
Maintenance 2 / 25
Adoption 6 / 25
Maturity 24 / 25
Community 9 / 25

How are scores calculated?

Stars

17

Forks

2

Language

Python

License

Apache-2.0

Last pushed

Sep 04, 2025

Commits (30d)

0

Dependencies

2

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/taladari/rag-firewall"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.