sgasser/pasteguard

AI gets the context. Not your secrets. Open-source privacy proxy for LLMs.

Score: 47/100 (Emerging)

This project helps individuals and teams safely use AI tools like ChatGPT or coding assistants without exposing sensitive information. It takes your input (text, code, customer data) and automatically masks personal details, API keys, and other secrets before sending it to the AI. The AI then processes the context without your confidential data, while you, the user, still see the original, unmasked information. Anyone who uses AI tools for work, whether in chat, self-hosted apps, or coding environments, can benefit.
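The mask-then-restore flow described above can be sketched conceptually in TypeScript. This is an illustrative sketch only, not PasteGuard's actual code: the detection patterns, placeholder format, and function names are assumptions.

```typescript
// Conceptual mask-then-restore sketch (not PasteGuard's real implementation).
// Placeholder format <LABEL_n> and the regex patterns are illustrative assumptions.

type MaskResult = { masked: string; restoreMap: Map<string, string> };

function maskSecrets(input: string): MaskResult {
  // Minimal example patterns; a real tool would cover many more secret types.
  const patterns: Array<[string, RegExp]> = [
    ["EMAIL", /[\w.+-]+@[\w-]+\.[\w.]+/g],
    ["API_KEY", /sk-[A-Za-z0-9]{16,}/g],
  ];
  const restoreMap = new Map<string, string>();
  let masked = input;
  let counter = 0;
  for (const [label, re] of patterns) {
    masked = masked.replace(re, (match) => {
      const placeholder = `<${label}_${counter++}>`;
      restoreMap.set(placeholder, match); // remember original so we can restore it
      return placeholder;
    });
  }
  return { masked, restoreMap };
}

function restoreSecrets(text: string, restoreMap: Map<string, string>): string {
  // Swap each placeholder back to the original value for the user's view.
  let restored = text;
  for (const [placeholder, original] of restoreMap) {
    restored = restored.split(placeholder).join(original);
  }
  return restored;
}
```

Only the masked text leaves your machine; the restore map stays local, so the AI never sees the real values while your own view remains unchanged.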

546 stars. Maintained, with 1 commit in the last 30 days.

Use this if you want to leverage AI for tasks involving sensitive customer data, proprietary code, or internal company information without the risk of leaks.

Not ideal if your AI interactions never involve any sensitive personal data, secrets, or confidential business information.

Tags: data-privacy, AI-safety, compliance, secure-development, customer-support-AI
No package published; no dependents.
Maintenance: 13/25
Adoption: 10/25
Maturity: 13/25
Community: 11/25


Stars: 546
Forks: 18
Language: TypeScript
License: Apache-2.0
Last pushed: Mar 13, 2026
Commits (30d): 1

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/sgasser/pasteguard"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.