rohansx/cloakpipe
Privacy middleware for LLM & RAG pipelines: consistent pseudonymization, an encrypted vault, and SSE streaming rehydration.
This tool helps organizations use large language models (LLMs) safely by ensuring sensitive personal information, like names or ID numbers, is never seen by the AI. You send your regular prompts with real data, and the tool automatically replaces sensitive details with placeholders before sending them to the LLM. The LLM processes the anonymized text, and the tool then restores the original information in the response, so your users get accurate and complete answers without compromising privacy. It's designed for anyone handling customer data, employee records, or other confidential information who wants to leverage AI without data leakage.
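The round trip described above (replace sensitive values with placeholders, send the masked text to the LLM, then restore the originals in the response) can be sketched as follows. This is a minimal illustration, not cloakpipe's actual API: the `Vault` type and the `pseudonymize`/`rehydrate` functions are hypothetical names, and a real implementation would detect PII automatically and encrypt the vault rather than take a hand-supplied list and keep plaintext in memory.

```rust
use std::collections::HashMap;

/// Illustrative vault: each distinct sensitive value maps to a stable
/// placeholder ("consistent pseudonymization"), and a reverse map lets
/// the original values be restored in the model's response.
struct Vault {
    forward: HashMap<String, String>, // real value -> placeholder
    reverse: HashMap<String, String>, // placeholder -> real value
}

impl Vault {
    fn new() -> Self {
        Vault { forward: HashMap::new(), reverse: HashMap::new() }
    }

    /// Replace each listed sensitive value with a stable placeholder
    /// like [PII_0]. Repeated values reuse the same placeholder, so the
    /// LLM can still track references across the text.
    fn pseudonymize(&mut self, text: &str, sensitive: &[&str]) -> String {
        let mut out = text.to_string();
        for value in sensitive {
            let placeholder = match self.forward.get(*value) {
                Some(p) => p.clone(),
                None => {
                    let p = format!("[PII_{}]", self.forward.len());
                    self.forward.insert((*value).to_string(), p.clone());
                    self.reverse.insert(p.clone(), (*value).to_string());
                    p
                }
            };
            // Naive substring replace; a real tool would match tokens/spans.
            out = out.replace(value, &placeholder);
        }
        out
    }

    /// Restore the original values in the (masked) model response.
    fn rehydrate(&self, text: &str) -> String {
        let mut out = text.to_string();
        for (placeholder, value) in &self.reverse {
            out = out.replace(placeholder, value);
        }
        out
    }
}

fn main() {
    let mut vault = Vault::new();
    let masked = vault.pseudonymize("Alice emailed Alice's manager", &["Alice"]);
    assert_eq!(masked, "[PII_0] emailed [PII_0]'s manager");
    // After the LLM responds, placeholders are swapped back:
    assert_eq!(vault.rehydrate(&masked), "Alice emailed Alice's manager");
}
```

The key design point the tagline hints at is consistency: the same value always maps to the same placeholder within a session, which is what lets the LLM reason coherently about entities it never actually sees. Streaming (SSE) rehydration would additionally need to buffer partial placeholder tokens at chunk boundaries, which this sketch omits.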
Use this if you need to use AI models with confidential information, like customer support queries, medical notes, or legal documents, and require a robust way to protect personally identifiable information (PII) from the AI itself.
Not ideal if your data contains no sensitive information that needs protection, or if you require a solution that permanently redacts PII without the ability to restore it later.
Stars
14
Forks
4
Language
Rust
License
MIT
Category
Last pushed
Mar 11, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/rohansx/cloakpipe"
Open to everyone: 100 requests/day with no key; a free key raises the limit to 1,000/day.
Higher-rated alternatives
LLAMATOR-Core/llamator
Red Teaming python-framework for testing chatbots and GenAI systems.
sleeepeer/PoisonedRAG
[USENIX Security 2025] PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented...
kelkalot/simpleaudit
Lets you red-team your AI systems through adversarial probing. It is simple, effective, and...
JuliusHenke/autopentest
CLI enabling more autonomous black-box penetration tests using Large Language Models (LLMs)
SecurityClaw/SecurityClaw
A modular, skill-based autonomous Security Operations Center (SOC) agent that monitors...