VissaMoutafis/Membership-Inference-Research

Bachelor's Thesis on Membership Inference Attacks

Score: 28 / 100 (Experimental)

This research provides a framework and resources for understanding Membership Inference Attacks (MIAs) against machine learning models. It helps assess whether a trained model could expose private information about the individuals whose data was used to create it. Given an existing ML model and a set of candidate data points, it outputs whether those points were likely part of the model's training set, revealing a potential privacy breach. This is useful for privacy researchers, data protection officers, and ML security engineers.
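To make the idea concrete, here is a minimal, hypothetical sketch of the simplest style of membership inference attack: a confidence threshold. This is not code from the repository; it only illustrates the intuition that overfit models tend to be unusually confident on points they were trained on, so high confidence can be used as a (noisy) membership signal.

```python
# Hypothetical sketch, not the repo's actual implementation:
# a confidence-threshold membership inference attack. The attacker
# queries the target model for its top-class confidence on each
# candidate point and guesses "member" when that confidence is
# suspiciously high.

def threshold_mia(confidences, threshold=0.9):
    """Return membership guesses: True means the point is guessed
    to have been in the model's training set."""
    return [c >= threshold for c in confidences]

# Toy illustration with made-up confidence scores: training members
# tend to receive higher confidence than unseen points.
member_conf = [0.99, 0.97, 0.95]     # points the model trained on
nonmember_conf = [0.70, 0.85, 0.60]  # unseen points

guesses = threshold_mia(member_conf + nonmember_conf)
print(guesses)  # [True, True, True, False, False, False]
```

Real attacks in the literature (e.g. shadow-model attacks) replace the fixed threshold with a learned attack model, but the underlying signal is the same.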

No commits in the last 6 months.

Use this if you need to evaluate the privacy vulnerabilities of a machine learning model by determining if it leaks information about its training data members.

Not ideal if you are looking for a highly optimized, production-ready tool for implementing defenses against membership inference attacks.

data-privacy machine-learning-security privacy-assessment data-protection model-auditing
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 7 / 25


Stars: 11
Forks: 1
Language: Jupyter Notebook
License: MIT
Last pushed: Nov 11, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/VissaMoutafis/Membership-Inference-Research"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.