taladari/rag-firewall
Client-side retrieval firewall for RAG systems — blocks prompt injection and secret leaks, re-ranks stale or untrusted content, and keeps all data inside your environment.
This tool helps teams building applications using large language models (LLMs) ensure the safety and reliability of the information those models use. It acts as a gatekeeper, scanning the data retrieved to answer a user's question and blocking sensitive information like secrets or malicious instructions before it ever reaches the LLM. The result is safer, more trustworthy LLM responses for end-users in fields like finance, government, or healthcare.
No commits in the last 6 months. Available on PyPI.
Use this if you are developing AI applications and need to prevent sensitive data leaks or prompt injection attacks by filtering the information your LLM receives.
Not ideal if you need to filter the LLM's final response or are looking for a cloud-based security service.
Stars
17
Forks
2
Language
Python
License
Apache-2.0
Category
Last pushed
Sep 04, 2025
Commits (30d)
0
Dependencies
2
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/taladari/rag-firewall"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
LLAMATOR-Core/llamator
Red Teaming python-framework for testing chatbots and GenAI systems.
sleeepeer/PoisonedRAG
[USENIX Security 2025] PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented...
kelkalot/simpleaudit
Allows to red-team your AI systems through adversarial probing. It is simple, effective, and...
JuliusHenke/autopentest
CLI enabling more autonomous black-box penetration tests using Large Language Models (LLMs)
SecurityClaw/SecurityClaw
A modular, skill-based autonomous Security Operations Center (SOC) agent that monitors...