takashiishida/cleanprompt
Anonymize sensitive information in text prompts before sending them to LLM applications
CleanPrompt helps you share text with AI language models without exposing private details. It takes your original text, replaces sensitive information like names, emails, and organizations with placeholders, and then gives you the anonymized version to send to an AI. After the AI responds, you can put the anonymized response back into CleanPrompt to restore the original details. This is for anyone who uses AI chat applications and wants to ensure their confidential information remains private.
No commits in the last 6 months.
Use this if you frequently interact with large language models and want to protect personal or confidential data from being stored or used in their training.
Not ideal if you need a solution that integrates directly into an existing application via an API or if your primary concern is encrypting data at rest.
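The round trip described above (replace sensitive values with placeholders, send the sanitized prompt to the model, then map placeholders back in the response) can be sketched as follows. This is a minimal illustration of the general technique, not CleanPrompt's actual API; the function names, the placeholder format, and the email-only regex are assumptions for the example.

```python
import re

# Illustrative email pattern; a real tool would also cover names,
# organizations, phone numbers, etc.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def anonymize(text: str) -> tuple[str, dict[str, str]]:
    """Replace each email with a placeholder; return text and the mapping."""
    mapping: dict[str, str] = {}

    def repl(match: re.Match) -> str:
        placeholder = f"<EMAIL_{len(mapping) + 1}>"
        mapping[placeholder] = match.group(0)
        return placeholder

    return EMAIL_RE.sub(repl, text), mapping

def restore(text: str, mapping: dict[str, str]) -> str:
    """Put the original values back into the model's response."""
    for placeholder, original in mapping.items():
        text = text.replace(placeholder, original)
    return text

prompt = "Please draft a reply to alice@example.com about the invoice."
safe_prompt, mapping = anonymize(prompt)
# safe_prompt contains "<EMAIL_1>" instead of the address
answer = f"Sure! Dear {list(mapping)[0]}, ..."  # stand-in for the LLM reply
print(restore(answer, mapping))
```

Because the mapping never leaves your machine, the model only ever sees the placeholders; restoring is a simple string substitution in the opposite direction.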
Stars
20
Forks
6
Language
Python
License
MIT
Category
Prompt Engineering
Last pushed
Mar 24, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/takashiishida/cleanprompt"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
Higher-rated alternatives
protectai/llm-guard
The Security Toolkit for LLM Interactions
MaxMLang/pytector
Easy to use LLM Prompt Injection Detection / Detector Python Package with support for local...
utkusen/promptmap
a security scanner for custom LLM applications
agencyenterprise/PromptInject
PromptInject is a framework that assembles prompts in a modular fashion to provide a...
Resk-Security/Resk-LLM
Resk is a robust Python library designed to enhance security and manage context when...