RPBLC-hq/RPBLC.DAM
The PII firewall for AI agents.
This project helps operations engineers, security teams, and compliance officers keep sensitive customer data and internal secrets from being exposed to external AI models. It sits as a protective layer between your application and the LLM: requests (such as customer emails or support tickets) are intercepted before they leave your infrastructure, personally identifiable information (PII) and secrets are scrubbed out, and only the anonymized data is forwarded to the LLM. The original values stay stored securely on your own machine, so your AI agents can still process the data without ever handling the sensitive values directly.
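The scrub-then-forward flow described above can be sketched in a few lines of shell. This is a minimal illustration, not the project's actual implementation: the regex, filenames, and `<EMAIL>` placeholder are all assumptions chosen for the example.

```shell
# Hypothetical sketch of the flow: keep the original locally,
# mask email addresses, and only the anonymized copy would be
# sent to the external LLM.
echo "Contact alice@example.com about the refund" > original.txt

# Replace anything that looks like an email with a placeholder
# (illustrative regex; a real scrubber covers many more PII types).
sed -E 's/[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/<EMAIL>/g' \
  original.txt > anonymized.txt

cat anonymized.txt
# → Contact <EMAIL> about the refund
```

The original file never leaves the machine; only `anonymized.txt` would be forwarded, which is the core guarantee the project advertises.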
Use this if you are using external AI models (like ChatGPT or Claude) and need to prevent them from directly accessing or storing sensitive customer data, proprietary information, or internal secrets.
Not ideal if your AI models are entirely self-hosted within your secure environment and do not interact with any external LLM providers.
Stars: 7
Forks: —
Language: Rust
License: Apache-2.0
Category:
Last pushed: Mar 01, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/RPBLC-hq/RPBLC.DAM"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
cxumol/promptmask
Never give AI companies your secrets! A local LLM-based privacy filter for LLM users. Seamless...
sgasser/pasteguard
AI gets the context. Not your secrets. Open-source privacy proxy for LLMs.
AgenticA5/A5-PII-Anonymizer
Desktop App with Built-In LLM for Removing Personal Identifiable Information in Documents
rpgeeganage/pII-guard
🛡️ PII Guard is an LLM-powered tool that detects and manages Personally Identifiable Information...
subodhkc/llmverify-npm
AI model health monitor for LLM apps – runtime checks for drift, hallucination risk, latency,...