yonsei-sslab/MIA
🔒 Implementation of Shokri et al. (2016), "Membership Inference Attacks against Machine Learning Models"
This tool evaluates the privacy risk of a machine learning model by determining whether specific data points were used in its training. You provide a trained model and a dataset; it outputs metrics such as accuracy, precision, and recall, along with an ROC curve, indicating how susceptible the model is to membership inference attacks. It is aimed at machine learning developers and privacy researchers concerned about data leakage from models.
No commits in the last 6 months.
Use this if you need to assess your machine learning models' susceptibility to membership inference attacks, in which an attacker tries to determine whether a given record was part of the training set.
Not ideal if you are looking for a general-purpose privacy-enhancing technology or a tool to directly anonymize datasets.
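To illustrate the kind of leakage this tool measures, here is a minimal sketch of a membership inference attack using simple confidence thresholding on an overfit model. This is an illustrative simplification, not the repo's shadow-model implementation from Shokri et al.; the dataset and classifier below are arbitrary choices for the demo.

```python
# Illustrative confidence-threshold membership inference attack.
# NOT the shadow-model attack implemented by yonsei-sslab/MIA.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
# "Members" are the training half; "non-members" are held out.
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

# Deliberately overfit a target model on the member half.
target = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_in, y_in)

# Attack signal: the model's confidence in each record's true label.
conf_in = target.predict_proba(X_in)[np.arange(len(y_in)), y_in]
conf_out = target.predict_proba(X_out)[np.arange(len(y_out)), y_out]

# Members tend to receive higher confidence; AUC above 0.5 indicates leakage.
membership = np.concatenate([np.ones(len(conf_in)), np.zeros(len(conf_out))])
auc = roc_auc_score(membership, np.concatenate([conf_in, conf_out]))
print(f"membership inference AUC: {auc:.2f}")
```

An AUC near 0.5 means the model's confidence reveals little about membership; the closer it gets to 1.0, the more the model leaks about its training set.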
Stars
34
Forks
9
Language
Python
License
MIT
Category
Last pushed
Aug 29, 2022
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/yonsei-sslab/MIA"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
google/scaaml
SCAAML: Side Channel Attacks Assisted with Machine Learning
pralab/secml
A Python library for Secure and Explainable Machine Learning
Koukyosyumei/AIJack
Security and Privacy Risk Simulator for Machine Learning (arXiv:2312.17667)
AI-SDC/SACRO-ML
Collection of tools and resources for managing the statistical disclosure control of trained...
oss-slu/mithridatium
Mithridatium is a research-driven project aimed at detecting backdoors and data poisoning in...