sgasser/pasteguard
AI gets the context. Not your secrets. Open-source privacy proxy for LLMs.
This project helps individuals and teams use AI tools like ChatGPT or coding assistants safely, without exposing sensitive information. It takes your input (text, code, customer data) and automatically masks personal details, API keys, and other secrets before sending it to the AI. The model processes the context without ever seeing your confidential data, while you still see the original, unmasked information. Anyone who uses AI tools for work, whether in chat, self-hosted apps, or coding environments, can benefit.
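The mask-then-unmask flow is straightforward to picture in code. The sketch below is a hypothetical TypeScript illustration of that pattern, not PasteGuard's actual implementation; the detection patterns and names are assumptions for demonstration. It swaps emails and API-key-shaped strings for placeholders before text leaves your machine, keeps a local lookup table, and restores the originals in the model's reply.

// Hypothetical sketch of the mask-then-unmask pattern; not PasteGuard's real API.
type MaskResult = { masked: string; restore: Map<string, string> };

// Illustrative detection patterns (assumed, not exhaustive).
const PATTERNS: Record<string, RegExp> = {
  EMAIL: /[\w.+-]+@[\w-]+\.[\w.]+/g,
  API_KEY: /sk-[A-Za-z0-9]{20,}/g, // e.g. OpenAI-style secret keys
};

// Replace each match with a placeholder and remember the original locally.
function mask(input: string): MaskResult {
  const restore = new Map<string, string>();
  let masked = input;
  let counter = 0;
  for (const [label, re] of Object.entries(PATTERNS)) {
    masked = masked.replace(re, (match) => {
      const placeholder = `[${label}_${++counter}]`;
      restore.set(placeholder, match);
      return placeholder;
    });
  }
  return { masked, restore };
}

// Put the real values back into the model's reply, entirely on your side.
function unmask(output: string, restore: Map<string, string>): string {
  let result = output;
  for (const [placeholder, original] of restore) {
    result = result.split(placeholder).join(original);
  }
  return result;
}

const { masked, restore } = mask("Contact jane@acme.com, key sk-abcdef0123456789abcdef");
console.log(masked);                  // Contact [EMAIL_1], key [API_KEY_2]
console.log(unmask(masked, restore)); // original text restored

The key design point is that the restore map never leaves your machine: only the placeholder version of the text reaches the model.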
546 stars. Maintained, with 1 commit in the last 30 days.
Use this if you want to leverage AI for tasks involving sensitive customer data, proprietary code, or internal company information without the risk of leaks.
Not ideal if your AI interactions never involve any sensitive personal data, secrets, or confidential business information.
Stars: 546
Forks: 18
Language: TypeScript
License: Apache-2.0
Category:
Last pushed: Mar 13, 2026
Commits (30d): 1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/sgasser/pasteguard"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
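For programmatic use, a minimal TypeScript sketch of the same call is below; the JSON shape of the response is an assumption, so inspect the actual output before depending on specific fields.

// Minimal sketch: fetch the quality data for this repo (Node 18+ has global fetch).
const url =
  "https://pt-edge.onrender.com/api/v1/quality/llm-tools/sgasser/pasteguard";

async function fetchRepoStats(): Promise<void> {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`API request failed: ${res.status}`);
  const data: unknown = await res.json(); // shape unverified; log it first
  console.log(JSON.stringify(data, null, 2));
}

fetchRepoStats().catch(console.error);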
Higher-rated alternatives
cxumol/promptmask
Never give AI companies your secrets! A local LLM-based privacy filter for LLM users. Seamless...
AgenticA5/A5-PII-Anonymizer
Desktop App with Built-In LLM for Removing Personal Identifiable Information in Documents
rpgeeganage/pII-guard
🛡️ PII Guard is an LLM-powered tool that detects and manages Personally Identifiable Information...
subodhkc/llmverify-npm
AI model health monitor for LLM apps – runtime checks for drift, hallucination risk, latency,...
QWED-AI/qwed-verification
Deterministic verification layer for LLMs | AI hallucination detection | Model output validation...