veil-services/veil-go
The sensitive-data firewall for AI. Detect and mask PII (emails, credit cards, CPFs) locally, with zero added latency, before sending prompts to LLMs. Thread-safe and production-ready.
If you're building applications on top of large language models (LLMs), this tool protects sensitive customer data such as emails and credit card numbers. It detects personal information in your prompts, temporarily replaces it with anonymous placeholders, sends the anonymized text to the LLM, and then restores the original values in the LLM's response. It is aimed at developers and engineering teams building AI-powered services who need to ensure data privacy and compliance.
Use this if you need to send user-provided text containing PII to an external LLM service but must prevent that sensitive data from ever leaving your control or being stored by the LLM.
Not ideal if your application doesn't interact with LLMs or if you need to detect and mask PII in a language other than English or Portuguese, as it currently supports a specific set of detectors.
Stars: 8
Forks: —
Language: Go
License: MIT
Category: —
Last pushed: Dec 05, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/veil-services/veil-go"
Open to everyone: 100 requests/day with no key needed, or get a free key for 1,000/day.
Higher-rated alternatives
cxumol/promptmask
Never give AI companies your secrets! A local LLM-based privacy filter for LLM users. Seamless...
sgasser/pasteguard
AI gets the context. Not your secrets. Open-source privacy proxy for LLMs.
AgenticA5/A5-PII-Anonymizer
Desktop App with Built-In LLM for Removing Personal Identifiable Information in Documents
QWED-AI/qwed-verification
Deterministic verification layer for LLMs | AI hallucination detection | Model output validation...
rpgeeganage/pII-guard
🛡️ PII Guard is an LLM-powered tool that detects and manages Personally Identifiable Information...