gbrigandi/mcp-server-conceal
Privacy-focused MCP proxy that intelligently pseudo-anonymizes PII in real-time before data reaches external AI providers, maintaining semantic relationships for accurate analysis
When you use external AI tools like Claude or ChatGPT, this proxy helps prevent sensitive personal information (PII) from leaving your organization. It intercepts data from your internal systems, replaces real PII with realistic but fake values, and forwards the anonymized data to the external AI provider. The AI receives data that keeps its structure and meaning for accurate analysis, without ever seeing the actual sensitive details. This is ideal for compliance officers, data privacy managers, or anyone handling customer or employee data with AI.
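The key idea, keeping "semantic relationships" intact, means the same real value is always mapped to the same fake value, so cross-references in the text remain consistent. A minimal conceptual sketch in Python (the actual tool is written in Rust; the names and the replacement list here are hypothetical, not the tool's API):

```python
import hashlib

# Hypothetical replacement pool; the real tool generates realistic fake data.
FAKE_NAMES = ["Alex Morgan", "Sam Carter", "Jamie Lee", "Riley Quinn"]

_mapping: dict[str, str] = {}

def pseudonymize_name(real_name: str) -> str:
    """Deterministically map a real name to a fake one.

    The same input always yields the same output, so a document that
    mentions a person twice stays internally consistent after masking.
    """
    if real_name not in _mapping:
        # Hash the real value to pick a stable replacement.
        digest = hashlib.sha256(real_name.encode()).hexdigest()
        _mapping[real_name] = FAKE_NAMES[int(digest, 16) % len(FAKE_NAMES)]
    return _mapping[real_name]

text = "Email Jane Doe; Jane Doe approved the refund."
masked = text.replace("Jane Doe", pseudonymize_name("Jane Doe"))
print(masked)  # both mentions of "Jane Doe" become the same fake name
```

Because the mapping is deterministic, an external AI can still reason that both mentions refer to one person, which is what makes the anonymized data useful for analysis.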
No commits in the last 6 months.
Use this if you need to analyze sensitive data with external AI services but must comply with privacy regulations like GDPR or HIPAA, ensuring PII is never exposed.
Not ideal if your AI analysis strictly requires raw, unaltered PII, or if you only process non-sensitive, public data.
Stars: 11
Forks: 3
Language: Rust
License: MIT
Category:
Last pushed: Jul 24, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/mcp/gbrigandi/mcp-server-conceal"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
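The example URL above suggests an endpoint pattern of /api/v1/quality/mcp/{owner}/{repo}; this is inferred from the single example, and the response schema is not documented here. A small helper for building the URL for any repository:

```python
from urllib.parse import quote

# Base path inferred from the example curl command; not an official spec.
BASE = "https://pt-edge.onrender.com/api/v1/quality/mcp"

def quality_endpoint(owner: str, repo: str) -> str:
    """Build the quality-data URL for a given GitHub owner/repo pair."""
    return f"{BASE}/{quote(owner)}/{quote(repo)}"

print(quality_endpoint("gbrigandi", "mcp-server-conceal"))
# → https://pt-edge.onrender.com/api/v1/quality/mcp/gbrigandi/mcp-server-conceal
```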
Higher-rated alternatives
AndrewAltimit/template-repo
Agent orchestration & security template featuring MCP tool building, agent2agent workflows,...
knowledgepa3/gia-mcp-server
MCP proxy for GIA Governance — connects Claude Desktop and Claude Code to the hosted GIA...
Chimera-Protocol/csl-core
Deterministic safety layer for AI agents. Z3-verified policy enforcement.
portofcontext/pctx
pctx is the execution layer for agentic tool calls. It auto-converts agent tools and MCP servers...
agentralabs/agentic-contract
Policy engine for AI agents — enforceable rules, risk limits, approval gates, obligation...