cxumol/promptmask
Never give AI companies your secrets! A local LLM-based privacy filter for LLM users. Seamless integration with your existing AI tools as a Python library / OpenAI SDK replacement / API Gateway / Web Server.
This tool acts as a privacy filter for anyone using AI models that handle sensitive information. It takes your prompts containing private data, automatically redacts the sensitive parts before sending them to a third-party AI, and then restores the original data in the AI's response. It is aimed at professionals such as HR managers, customer support agents, and researchers who need powerful cloud AI without exposing confidential details.
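The redact-then-restore flow described above can be sketched in plain Python. This is a minimal illustration of the general mask/unmask pattern, not promptmask's actual API; the placeholder format, function names, and regex patterns here are assumptions for demonstration only:

```python
import re

def mask(text, patterns):
    """Replace spans matching the given regexes with placeholders.

    Returns the masked text plus a mapping from placeholder back to
    the original sensitive value, so the response can be restored later.
    """
    mapping = {}
    counter = 0

    def repl(match):
        nonlocal counter
        key = f"<SECRET_{counter}>"
        mapping[key] = match.group(0)
        counter += 1
        return key

    for pattern in patterns:
        text = re.sub(pattern, repl, text)
    return text, mapping

def unmask(text, mapping):
    """Substitute the original values back into the AI's response."""
    for key, original in mapping.items():
        text = text.replace(key, original)
    return text

# Redact an email address before the prompt leaves the machine...
masked, mapping = mask("Email alice@example.com about the deal.",
                       [r"[\w.]+@[\w.]+"])
# masked == "Email <SECRET_0> about the deal."

# ...then restore it in the (hypothetical) model reply.
restored = unmask(masked, mapping)
# restored == "Email alice@example.com about the deal."
```

promptmask's key difference from this sketch is that detection is done by a local LLM rather than hand-written regexes, so sensitive spans are found semantically instead of by fixed patterns.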
Available on PyPI.
Use this if you need to use powerful cloud-based AI services for tasks involving sensitive customer details, proprietary business information, or private medical records, but want to ensure that data remains confidential and secure.
Not ideal if your AI interactions never involve any sensitive or confidential data that you need to protect from third-party services.
Stars: 94
Forks: 11
Language: Python
License: MIT
Category:
Last pushed: Jan 04, 2026
Commits (30d): 0
Dependencies: 3
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/cxumol/promptmask"
Open to everyone: 100 requests/day with no key needed, or get a free key for 1,000/day.
Related tools
sgasser/pasteguard
AI gets the context. Not your secrets. Open-source privacy proxy for LLMs.
AgenticA5/A5-PII-Anonymizer
Desktop App with Built-In LLM for Removing Personal Identifiable Information in Documents
rpgeeganage/pII-guard
🛡️ PII Guard is an LLM-powered tool that detects and manages Personally Identifiable Information...
subodhkc/llmverify-npm
AI model health monitor for LLM apps – runtime checks for drift, hallucination risk, latency,...
QWED-AI/qwed-verification
Deterministic verification layer for LLMs | AI hallucination detection | Model output validation...