yonsei-sslab/MIA

🔒 Implementation of Shokri et al. (2016), "Membership Inference Attacks against Machine Learning Models"

Score: 40 / 100 (Emerging)

This tool helps evaluate the privacy risks of a machine learning model by determining whether specific data points were used in its training. You provide a trained model and a dataset, and it outputs metrics such as accuracy, precision, and recall, along with an ROC curve, indicating how susceptible the model is to membership inference attacks. It is intended for machine learning developers and privacy researchers concerned about data leakage from models.
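To illustrate the kind of evaluation described above, here is a minimal confidence-threshold membership inference baseline. This sketch does not use this repository's actual API; the model, dataset, threshold, and variable names are all illustrative assumptions:

```python
# Illustrative membership inference baseline (not this repo's API):
# a model that overfits assigns higher confidence to its training
# points ("members") than to held-out points ("non-members").
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score, roc_auc_score

# Synthetic data: first half trains the target model (members),
# second half is held out (non-members).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_mem, y_mem = X[:1000], y[:1000]
X_non, y_non = X[1000:], y[1000:]

target = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_mem, y_mem)

# Attack signal: the target's confidence on its predicted class.
conf_mem = target.predict_proba(X_mem).max(axis=1)
conf_non = target.predict_proba(X_non).max(axis=1)

scores = np.concatenate([conf_mem, conf_non])
labels = np.concatenate([np.ones(1000), np.zeros(1000)])  # 1 = member

auc = roc_auc_score(labels, scores)          # threshold-free susceptibility
preds = (scores >= 0.9).astype(int)          # arbitrary illustrative cutoff
print(f"attack AUC:       {auc:.3f}")
print(f"attack precision: {precision_score(labels, preds):.3f}")
print(f"attack recall:    {recall_score(labels, preds):.3f}")
```

An AUC near 0.5 means the model leaks little membership signal; values approaching 1.0 indicate strong leakage. Shokri et al.'s attack replaces this simple threshold with shadow models trained to mimic the target.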

No commits in the last 6 months.

Use this if you need to assess the privacy vulnerabilities of your machine learning models, specifically their susceptibility to membership inference attacks, in which an attacker tries to guess whether their data was part of the training set.

Not ideal if you are looking for a general-purpose privacy-enhancing technology or a tool to directly anonymize datasets.

model-privacy machine-learning-security data-privacy-assessment AI-ethics privacy-auditing
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 7 / 25
Maturity 16 / 25
Community 17 / 25


Stars: 34
Forks: 9
Language: Python
License: MIT
Last pushed: Aug 29, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/yonsei-sslab/MIA"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
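The same endpoint can be queried from Python with only the standard library. This is a sketch; the JSON fields returned by the API are not documented on this page, so the response is treated as an opaque dict:

```python
# Fetch the quality report for a repository from the endpoint shown above.
# The URL comes from this page; the shape of the JSON response is unknown
# here, so we just decode it into a dict.
import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/yonsei-sslab/MIA"

def fetch_quality(url: str) -> dict:
    """Fetch and decode the JSON quality report at `url`."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))

# report = fetch_quality(URL)  # uncomment to hit the live endpoint
# print(report)
```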