cloakllm/CloakLLM
Open-source PII cloaking + tamper-evident audit logs for LLM API calls
This tool lets organizations use large language models (LLMs) without exposing personally identifiable information (PII) such as email addresses or credit card numbers. It takes your prompt text, automatically detects PII, replaces it with temporary tokens, and sends the de-identified prompt to the LLM, keeping a tamper-evident log of each action to demonstrate compliance (a rough sketch of this flow follows below). It is aimed at data privacy officers, compliance managers, and IT security teams who need to ensure data protection when integrating LLMs.
Use this if you need to protect sensitive customer or employee data when sending prompts to LLMs and require verifiable audit trails for compliance.
Not ideal if your LLM prompts contain no personal or sensitive data that requires anonymization or compliance logging.
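For illustration, here is a minimal Python sketch of the cloak-then-log flow described above. Everything in it is an assumption made for clarity: the regex patterns, the <LABEL_N> token format, and the SHA-256 hash chain are illustrative stand-ins, not CloakLLM's actual implementation or API.

import hashlib
import json
import re
import time

# Hypothetical PII detectors; CloakLLM's real detection rules may differ.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def cloak(prompt: str):
    """Replace detected PII with placeholder tokens; return (cloaked, mapping)."""
    mapping = {}
    for label, pattern in PII_PATTERNS.items():
        for i, match in enumerate(pattern.findall(prompt)):
            token = f"<{label}_{i}>"
            mapping[token] = match
            prompt = prompt.replace(match, token)
    return prompt, mapping

def append_audit(log: list, event: dict) -> None:
    """Tamper-evident log: each entry includes the hash of the previous entry."""
    prev = log[-1]["hash"] if log else "0" * 64
    body = {"ts": time.time(), "event": event, "prev": prev}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    log.append(body)

audit_log: list = []
cloaked, mapping = cloak("Contact jane@example.com, card 4111 1111 1111 1111")
append_audit(audit_log, {"action": "cloak", "tokens": list(mapping)})
print(cloaked)  # Contact <EMAIL_0>, card <CARD_0>

The hash chain is what makes such a log tamper-evident: altering any earlier entry changes its hash and breaks the prev links of every entry that follows it.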
Stars: 7
Forks: —
Language: —
License: MIT
Category: —
Last pushed: Mar 10, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/cloakllm/CloakLLM"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
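The same record can be fetched from any HTTP client. A minimal Python sketch using only the standard library follows; the response schema is an assumption based on the stats shown above.

import json
import urllib.request

# Endpoint taken from the curl example above.
URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/cloakllm/CloakLLM"

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

print(data)  # presumably includes stars, license, and last-push date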
Higher-rated alternatives
cxumol/promptmask
Never give AI companies your secrets! A local LLM-based privacy filter for LLM users. Seamless...
sgasser/pasteguard
AI gets the context. Not your secrets. Open-source privacy proxy for LLMs.
AgenticA5/A5-PII-Anonymizer
Desktop App with Built-In LLM for Removing Personal Identifiable Information in Documents
QWED-AI/qwed-verification
Deterministic verification layer for LLMs | AI hallucination detection | Model output validation...
rpgeeganage/pII-guard
🛡️ PII Guard is an LLM-powered tool that detects and manages Personally Identifiable Information...