RPBLC-hq/RPBLC.DAM

The PII firewall for AI agents.

Score: 25/100 (Experimental)

This project helps operations engineers, security professionals, and compliance officers ensure that sensitive customer data and internal secrets are not accidentally exposed to external AI models. It acts as a protective layer: it intercepts your application's requests (such as customer emails or support tickets) before they reach a Large Language Model (LLM), scrubs out personally identifiable information (PII) and secrets, sends only the anonymized data to the LLM, and stores the originals securely on your machine. Your AI agents can still process the data, but they never directly handle the sensitive values.

Use this if you are using external AI models (like ChatGPT or Claude) and need to prevent them from directly accessing or storing sensitive customer data, proprietary information, or internal secrets.

Not ideal if your AI models are entirely self-hosted within your secure environment and do not interact with any external LLM providers.
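The core idea above (scrub sensitive values out, keep the originals in a local vault, and send only placeholders to the LLM) can be sketched in a few lines of Rust. This is an illustrative sketch, not RPBLC.DAM's actual API: the `scrub_emails` function, the `<EMAIL_n>` placeholder format, and the naive `@`-based email detection are all assumptions made for the example.

```rust
use std::collections::HashMap;

/// Hypothetical sketch: replace email-like tokens with placeholders and
/// keep the originals in a local map, so only anonymized text is sent on.
/// A real firewall would use proper PII detectors, not this naive check.
fn scrub_emails(input: &str) -> (String, HashMap<String, String>) {
    let mut vault = HashMap::new(); // placeholder -> original value, kept locally
    let mut out = Vec::new();
    for token in input.split_whitespace() {
        if token.contains('@') && token.contains('.') {
            let key = format!("<EMAIL_{}>", vault.len());
            vault.insert(key.clone(), token.to_string());
            out.push(key);
        } else {
            out.push(token.to_string());
        }
    }
    (out.join(" "), vault)
}

fn main() {
    let (scrubbed, vault) =
        scrub_emails("Ticket from alice@example.com about billing");
    // Only the scrubbed text would go to the external LLM.
    println!("{scrubbed}"); // Ticket from <EMAIL_0> about billing
    // The original stays in the local vault for later re-insertion.
    assert_eq!(vault["<EMAIL_0>"], "alice@example.com");
}
```

The vault stays on your machine; when the LLM's response comes back, the placeholders can be swapped back to the original values before the result is shown to the user.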

data-privacy AI-governance data-security compliance PII-protection
No package · No dependents
Maintenance: 10/25
Adoption: 4/25
Maturity: 11/25
Community: 0/25


Stars: 7
Forks:
Language: Rust
License: Apache-2.0
Last pushed: Mar 01, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/RPBLC-hq/RPBLC.DAM"

Open to everyone: 100 requests/day with no key required. Get a free API key for 1,000 requests/day.