VissaMoutafis/Membership-Inference-Research
Bachelor's Thesis on Membership Inference Attacks
This research provides a framework and resources for understanding Membership Inference Attacks (MIAs) against machine learning models. Given a trained model and a set of data points, an MIA infers whether those points were likely part of the model's training set, revealing whether the model leaks private information about the individuals whose data was used to build it. This is useful for privacy researchers, data protection officers, and ML security engineers.
No commits in the last 6 months.
Use this if you need to evaluate the privacy vulnerabilities of a machine learning model by determining if it leaks information about its training data members.
Not ideal if you are looking for a highly optimized, production-ready tool for implementing defenses against membership inference attacks.
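The core idea described above can be illustrated with a minimal threshold-based attack sketch. This is not code from the repository; it is a toy in plain NumPy. The 1-nearest-neighbour "target model", the distance-based confidence proxy, and the threshold value are all illustrative assumptions: the attack flags a point as a training member when the model is unusually confident on it, which an overfit model tends to be.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "target model": a 1-nearest-neighbour classifier memorises its
# training set -- an extreme case of the overfitting that MIAs exploit.
train_X = rng.normal(size=(100, 5))
test_X = rng.normal(size=(100, 5))  # points NOT in the training set

def model_confidence(model_data, x):
    """Proxy for the model's confidence on x: negated distance to the
    closest training point (closer means more confident)."""
    return -np.min(np.linalg.norm(model_data - x, axis=1))

def infer_membership(model_data, x, threshold=-0.5):
    """Threshold attack: flag x as a training member when the model's
    confidence exceeds a tunable threshold (an assumed value here)."""
    return model_confidence(model_data, x) > threshold

members = [infer_membership(train_X, x) for x in train_X]
non_members = [infer_membership(train_X, x) for x in test_X]

# Training members score perfectly (distance 0 to themselves);
# held-out points are flagged far less often.
print("flagged members:", np.mean(members))
print("flagged non-members:", np.mean(non_members))
```

Real attacks in the literature (and presumably in this thesis) replace the distance proxy with the target model's prediction confidence or loss, often calibrated via shadow models, but the decision rule is the same: threshold a per-example score.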
Stars
11
Forks
1
Language
Jupyter Notebook
License
MIT
Category
Last pushed
Nov 11, 2022
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/VissaMoutafis/Membership-Inference-Research"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
google/scaaml
SCAAML: Side Channel Attacks Assisted with Machine Learning
pralab/secml
A Python library for Secure and Explainable Machine Learning
Koukyosyumei/AIJack
Security and Privacy Risk Simulator for Machine Learning (arXiv:2312.17667)
AI-SDC/SACRO-ML
Collection of tools and resources for managing the statistical disclosure control of trained...
liuyugeng/ML-Doctor
Code for ML Doctor