rohansx/cloakpipe

Privacy middleware for LLM & RAG pipelines - consistent pseudonymization, encrypted vault, SSE streaming rehydration.

Score: 41 / 100 (Emerging)

This tool helps organizations use large language models (LLMs) safely by ensuring sensitive personal information, like names or ID numbers, is never seen by the AI. You send your regular prompts with real data, and the tool automatically replaces sensitive details with placeholders before sending them to the LLM. The LLM processes the anonymized text, and the tool then restores the original information in the response, so your users get accurate and complete answers without compromising privacy. It's designed for anyone handling customer data, employee records, or other confidential information who wants to leverage AI without data leakage.
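The round trip described above (swap sensitive values for stable placeholders, send the anonymized text to the LLM, then restore the originals in the response) can be sketched roughly as follows. This is an illustrative std-only sketch, not cloakpipe's actual API; the `Vault` type, method names, and `<PII_n>` placeholder format are all assumptions:

```rust
use std::collections::HashMap;

/// Minimal sketch of consistent pseudonymization: the same real value
/// always maps to the same placeholder, so the LLM sees coherent text.
struct Vault {
    forward: HashMap<String, String>, // real value -> placeholder
    reverse: HashMap<String, String>, // placeholder -> real value
    counter: usize,
}

impl Vault {
    fn new() -> Self {
        Vault { forward: HashMap::new(), reverse: HashMap::new(), counter: 0 }
    }

    /// Replace each occurrence of a known-sensitive value with its placeholder,
    /// minting a new placeholder the first time a value is seen.
    fn pseudonymize(&mut self, text: &str, pii: &[&str]) -> String {
        let mut out = text.to_string();
        for value in pii {
            let placeholder = match self.forward.get(*value) {
                Some(p) => p.clone(),
                None => {
                    self.counter += 1;
                    let p = format!("<PII_{}>", self.counter);
                    self.forward.insert((*value).to_string(), p.clone());
                    self.reverse.insert(p.clone(), (*value).to_string());
                    p
                }
            };
            out = out.replace(value, &placeholder);
        }
        out
    }

    /// Restore the original values in the model's response.
    fn rehydrate(&self, text: &str) -> String {
        let mut out = text.to_string();
        for (placeholder, value) in &self.reverse {
            out = out.replace(placeholder, value);
        }
        out
    }
}

fn main() {
    let mut vault = Vault::new();
    let masked = vault.pseudonymize(
        "Email Alice Smith at alice@example.com",
        &["Alice Smith", "alice@example.com"],
    );
    // The LLM only ever sees the masked form, e.g. "Email <PII_1> at <PII_2>".
    let restored = vault.rehydrate(&masked);
    assert_eq!(restored, "Email Alice Smith at alice@example.com");
}
```

Keeping both a forward and a reverse map is what makes the mapping consistent across a session: repeated mentions of the same person yield the same placeholder, which matters for the LLM's ability to reason about the text.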

Use this if you need to use AI models with confidential information, like customer support queries, medical notes, or legal documents, and require a robust way to protect personally identifiable information (PII) from the AI itself.

Not ideal if your data contains no sensitive information that needs protection, or if you require a solution that permanently redacts PII without the ability to restore it later.
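The "SSE streaming rehydration" in the tagline implies that placeholders can arrive split across stream chunks, so a rehydrator must hold back any chunk suffix that might still grow into a placeholder. A rough std-only sketch of that buffering idea (the type name, `feed`/`finish` methods, and placeholder format are assumptions, not cloakpipe's actual implementation):

```rust
use std::collections::HashMap;

/// Rehydrates a streamed response in which placeholders like "<PII_1>"
/// may be split across SSE chunk boundaries.
struct StreamRehydrator {
    mapping: HashMap<String, String>, // placeholder -> original value
    buffer: String,                   // text not yet safe to emit
}

impl StreamRehydrator {
    fn new(mapping: HashMap<String, String>) -> Self {
        StreamRehydrator { mapping, buffer: String::new() }
    }

    /// Feed one stream chunk; returns the text that is safe to emit now.
    fn feed(&mut self, chunk: &str) -> String {
        self.buffer.push_str(chunk);
        // Substitute every complete placeholder currently in the buffer.
        for (placeholder, value) in &self.mapping {
            self.buffer = self.buffer.replace(placeholder, value);
        }
        // Hold back a trailing "<..." that might be an incomplete placeholder.
        let safe_len = match self.buffer.rfind('<') {
            Some(i) if !self.buffer[i..].contains('>') => i,
            _ => self.buffer.len(),
        };
        self.buffer.drain(..safe_len).collect()
    }

    /// Flush whatever remains once the stream ends.
    fn finish(&mut self) -> String {
        std::mem::take(&mut self.buffer)
    }
}

fn main() {
    let mut mapping = HashMap::new();
    mapping.insert("<PII_1>".to_string(), "Alice".to_string());
    let mut r = StreamRehydrator::new(mapping);

    // The placeholder "<PII_1>" arrives split across two chunks.
    let mut out = String::new();
    out.push_str(&r.feed("Hello <PI"));
    out.push_str(&r.feed("I_1>, welcome"));
    out.push_str(&r.finish());
    assert_eq!(out, "Hello Alice, welcome");
}
```

The trade-off in this kind of design is latency versus correctness: holding back a possible placeholder prefix delays at most a few characters, while emitting it eagerly would leak a half-substituted token to the client.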

Tags: data-privacy, compliance, AI-governance, information-security, large-language-models

No package published · No dependents

Maintenance: 10 / 25
Adoption: 5 / 25
Maturity: 11 / 25
Community: 15 / 25

Stars: 14
Forks: 4
Language: Rust
License: MIT
Last pushed: Mar 11, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/rohansx/cloakpipe"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.