SALT-NLP/search_privacy_risk

Code for the paper "Searching Privacy Risks in Multi-Agent Systems via Simulation"

Overall score: 23 / 100 (Experimental)

This tool helps you anticipate and mitigate privacy risks in conversational AI systems. It simulates interactions between different AI agents (a data subject, a data sender, and a data recipient) to uncover how malicious agents might extract sensitive information. The tool takes descriptions of your AI agents' roles and objectives, then outputs sophisticated attack strategies and robust defense mechanisms. Anyone responsible for the security and privacy of AI-powered agents, such as AI product managers or compliance officers, would find this useful.
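The three-role setup described above can be sketched in a few lines. This is a hypothetical illustration only (the `Agent` class, role names, and the toy `detect_leak` check are invented for this sketch and are not the repository's actual API); the real tool drives LLM agents through multi-turn simulation rather than simple substring matching.

```python
from dataclasses import dataclass

# Hypothetical sketch -- these names are illustrative, not the repo's real API.
@dataclass
class Agent:
    role: str        # "data_subject", "data_sender", or "data_recipient"
    objective: str   # free-text description of what the agent tries to do

def detect_leak(message: str, sensitive_items: list[str]) -> list[str]:
    """Toy leakage check: return the sensitive items that appear verbatim
    (case-insensitively) in an outgoing message."""
    return [item for item in sensitive_items if item.lower() in message.lower()]

# The three roles the simulation pits against each other.
agents = [
    Agent("data_subject", "share a medical record with a clinic"),
    Agent("data_sender", "summarize the record for a billing service"),
    Agent("data_recipient", "extract as much personal detail as possible"),
]

# Example: the sender's summary accidentally reveals a diagnosis.
leaks = detect_leak("Patient was treated for diabetes in 2021.", ["diabetes", "SSN"])
```

In the actual system, the detection step would inspect full conversation transcripts rather than a single message, but the role separation is the same.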

No commits in the last 6 months.

Use this if you need to proactively test your LLM-based agents for privacy vulnerabilities and develop strong defenses before deployment.

Not ideal if you are looking for a simple data anonymization tool or a framework for general AI agent development without a focus on privacy risk discovery.

Tags: AI-privacy-risk, LLM-security, conversational-AI-safety, agent-system-auditing, data-leakage-prevention

No License · Stale (6 months) · No Package · No Dependents

Maintenance: 2 / 25
Adoption: 6 / 25
Maturity: 7 / 25
Community: 8 / 25


Stars: 20
Forks: 2
Language: Jupyter Notebook
License: None
Last pushed: Oct 13, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/agents/SALT-NLP/search_privacy_risk"

Open to everyone: 100 requests/day with no key needed. Get a free API key for 1,000 requests/day.
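The same endpoint can also be called from Python. A minimal standard-library sketch is below; the response's JSON shape is not documented on this page, so the result is returned as a raw dict (the `build_url` and `fetch_quality` helper names are this sketch's own, not part of the API).

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/agents"

def build_url(owner: str, repo: str) -> str:
    """Construct the quality-score endpoint URL for a repository."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record as parsed JSON (requires network access)."""
    with urllib.request.urlopen(build_url(owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Equivalent to the curl command above.
    print(build_url("SALT-NLP", "search_privacy_risk"))
```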