cloakllm/CloakLLM

Open-source PII cloaking + tamper-evident audit logs for LLM API calls

Score: 25/100 (Experimental)

This tool helps organizations use large language models (LLMs) without exposing personally identifiable information (PII) such as email addresses or credit card numbers. It scans your prompt text, automatically identifies PII, replaces it with temporary tokens, and sends the de-identified prompt to the LLM. It also keeps a tamper-evident log of these actions so compliance can be demonstrated. It is aimed at data privacy officers, compliance managers, and IT security teams who need to ensure data protection when integrating LLMs.

Use this if you need to protect sensitive customer or employee data when sending prompts to LLMs and require verifiable audit trails for compliance.

Not ideal if your LLM prompts contain no personal or sensitive data that requires anonymization or compliance logging.
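The flow described above (detect PII, swap in tokens, forward the cloaked prompt, record a tamper-evident entry) can be sketched roughly as follows. This is a minimal illustration under stated assumptions: the regex patterns, token format, and hash-chained `AuditLog` class are hypothetical, not CloakLLM's actual API.

```python
import hashlib
import json
import re

# Illustrative PII detectors; a real tool would use far more robust recognizers.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def cloak(prompt):
    """Replace detected PII with placeholder tokens.

    Returns the cloaked prompt plus a token-to-original mapping that would
    let responses be re-identified locally, without the PII ever leaving.
    """
    mapping = {}
    counter = 0

    def make_repl(kind):
        def repl(match):
            nonlocal counter
            token = f"<{kind}_{counter}>"
            counter += 1
            mapping[token] = match.group(0)
            return token
        return repl

    for kind, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(make_repl(kind), prompt)
    return prompt, mapping

class AuditLog:
    """Tamper-evident log: each entry's hash covers the previous entry's hash,
    so altering any past record breaks the chain on verification."""

    def __init__(self):
        self.entries = []
        self.prev_hash = "0" * 64  # genesis value

    def append(self, record):
        payload = json.dumps(record, sort_keys=True)
        digest = hashlib.sha256((self.prev_hash + payload).encode()).hexdigest()
        self.entries.append({"record": record, "hash": digest})
        self.prev_hash = digest

    def verify(self):
        prev = "0" * 64
        for entry in self.entries:
            payload = json.dumps(entry["record"], sort_keys=True)
            if hashlib.sha256((prev + payload).encode()).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

log = AuditLog()
cloaked, mapping = cloak("Contact jane@example.com, card 4111 1111 1111 1111")
log.append({"action": "cloak", "tokens": sorted(mapping)})
print(cloaked)       # PII replaced by tokens before the prompt goes anywhere
print(log.verify())  # chain intact
```

The hash chain is what makes the log tamper-evident rather than merely append-only: editing or deleting an earlier record changes its payload hash, which no longer matches the link stored in the next entry.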

Tags: data-privacy, regulatory-compliance, LLM-governance, data-anonymization, audit-logging
No package published; no dependents.

Score breakdown:
Maintenance: 10/25
Adoption: 4/25
Maturity: 11/25
Community: 0/25


Stars: 7
Forks:
Language:
License: MIT
Last pushed: Mar 10, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/cloakllm/CloakLLM"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.