mmalekzadeh/honest-but-curious-nets

Honest-but-Curious Nets: Sensitive Attributes of Private Inputs Can Be Secretly Coded into the Classifiers' Outputs (ACM CCS'21)

Quality score: 35 / 100 (Emerging)

This project helps machine learning service providers understand a subtle privacy vulnerability. It demonstrates how a deep neural network can accurately perform its primary classification task while being covertly manipulated to encode sensitive user attributes into its public outputs. As a service provider, you can use it to investigate how such an "honest-but-curious" model takes seemingly harmless user inputs, returns standard classification results, and still embeds private information within those same outputs.
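To make the idea concrete, here is a toy sketch (not the repository's or the paper's actual method, which trains the encoding into the network's weights): a classifier's softmax output can carry one hidden bit by reordering the probabilities of two low-ranked "carrier" classes, without changing the predicted class or the fact that the vector sums to 1. All function names here are illustrative.

```python
def encode_bit(probs, secret_bit):
    """Hide one bit in a probability vector by setting the relative order
    of two fixed carrier classes (the first two non-argmax indices),
    while preserving the argmax and the total probability mass."""
    p = list(probs)
    top = max(range(len(p)), key=lambda k: p[k])          # predicted class
    c0, c1 = [k for k in range(len(p)) if k != top][:2]   # carrier classes
    mass = p[c0] + p[c1]                                  # mass to redistribute
    if secret_bit:
        p[c0], p[c1] = 0.6 * mass, 0.4 * mass             # p[c0] > p[c1] => 1
    else:
        p[c0], p[c1] = 0.4 * mass, 0.6 * mass             # p[c0] < p[c1] => 0
    return p

def decode_bit(probs):
    """Recover the hidden bit from the carrier-class ordering."""
    top = max(range(len(probs)), key=lambda k: probs[k])
    c0, c1 = [k for k in range(len(probs)) if k != top][:2]
    return 1 if probs[c0] > probs[c1] else 0

out = encode_bit([0.7, 0.1, 0.15, 0.05], 1)
print(decode_bit(out))          # recovered secret bit
print(max(range(4), key=lambda k: out[k]))  # argmax unchanged: still class 0
```

The honest party sees only a normal-looking prediction; the curious party, knowing the scheme, reads the bit back from the same output. The actual attack in the paper achieves this through training rather than post-hoc perturbation.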

No commits in the last 6 months.

Use this if you are a machine learning service provider or researcher concerned about data privacy and want to understand how sensitive user attributes can be covertly extracted from a classification model's outputs alone, without white-box access to the model.

Not ideal if you are looking for a general-purpose privacy-preserving machine learning library or a tool to directly defend against data leakage.

data-privacy machine-learning-security privacy-vulnerability neural-network-exploitation information-security-research
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 16 / 25
Community 13 / 25

How are scores calculated?

Stars: 17
Forks: 3
Language: Python
License: MIT
Last pushed: Jan 11, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/mmalekzadeh/honest-but-curious-nets"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.