mmalekzadeh/honest-but-curious-nets
Honest-but-Curious Nets: Sensitive Attributes of Private Inputs Can Be Secretly Coded into the Classifiers' Outputs (ACM CCS'21)
This project demonstrates a subtle privacy vulnerability for machine learning service providers: a deep neural network can accurately perform its primary classification task while being manipulated to covertly encode sensitive attributes of its private inputs into its public outputs. Service providers and auditors can use it to study how such an "honest-but-curious" model takes seemingly harmless user inputs, returns standard classification results, and secretly embeds private information within those same outputs.
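The output-coding idea can be illustrated with a minimal, self-contained sketch. This is not the repository's actual code: the slot-pairing scheme and the `encode`/`honest_decode`/`curious_decode` names are illustrative assumptions, showing only how one probability vector can simultaneously serve an honest classifier and leak a private bit.

```python
import numpy as np

def encode(class_idx, private_bit, n_classes, eps=0.05):
    """Build a softmax-like vector over 2*n_classes slots.
    Slots 2*c and 2*c+1 both stand for class c in the honest task;
    which of the pair carries the mass encodes the private bit."""
    probs = np.full(2 * n_classes, eps / (2 * n_classes - 1))
    probs[2 * class_idx + private_bit] = 1.0 - eps
    return probs  # sums to 1 like a softmax output

def honest_decode(probs):
    """Service user's view: collapse each slot pair into one class score."""
    return int(np.argmax(probs.reshape(-1, 2).sum(axis=1)))

def curious_decode(probs):
    """Curious party's view: the winning slot's parity leaks the bit."""
    return int(np.argmax(probs) % 2)

out = encode(class_idx=3, private_bit=1, n_classes=10)
print(honest_decode(out), curious_decode(out))  # class 3, private bit 1
```

The honest decoder sees an ordinary 10-class prediction, while the same vector hands the private bit to anyone who knows the slot convention; the real paper trains a network to produce such outputs end to end.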
No commits in the last 6 months.
Use this if you are a machine learning service provider or researcher concerned about data privacy and want to understand how sensitive user attributes can be covertly extracted from a classification model's outputs alone, without white-box access to the model.
Not ideal if you are looking for a general-purpose privacy-preserving machine learning library or a tool to directly defend against data leakage.
Stars: 17
Forks: 3
Language: Python
License: MIT
Category:
Last pushed: Jan 11, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/mmalekzadeh/honest-but-curious-nets"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
google/scaaml
SCAAML: Side Channel Attacks Assisted with Machine Learning
pralab/secml
A Python library for Secure and Explainable Machine Learning
Koukyosyumei/AIJack
Security and Privacy Risk Simulator for Machine Learning (arXiv:2312.17667)
AI-SDC/SACRO-ML
Collection of tools and resources for managing the statistical disclosure control of trained...
oss-slu/mithridatium
Mithridatium is a research-driven project aimed at detecting backdoors and data poisoning in...