MinChen00/UnlearningLeaks

Official implementation of "When Machine Unlearning Jeopardizes Privacy" (ACM CCS 2021)

Quality score: 36 / 100 (Emerging)

This project evaluates the privacy risks of machine unlearning, in particular its exposure to membership inference attacks. Given datasets and trained machine learning models, it runs attacks that test whether specific training samples can still be identified after they have been "unlearned." Data privacy researchers, machine learning engineers, and security analysts can use it to assess the effectiveness and security implications of different unlearning strategies.
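
Roughly, the attack studied in the paper exploits the discrepancy between a model before and after unlearning. The sketch below is a minimal, hypothetical Python illustration of that idea: an attack classifier is trained on pairs of posteriors from an original and an unlearned shadow model, then scores a target sample's membership. All names and the placeholder data are illustrative assumptions and do not mirror the repository's actual scripts or interfaces.

import numpy as np
from sklearn.linear_model import LogisticRegression

def attack_features(p_original, p_unlearned):
    # Concatenate both posterior vectors and their difference into one feature vector.
    return np.concatenate([p_original, p_unlearned, p_original - p_unlearned])

rng = np.random.default_rng(0)
n_samples, n_classes = 200, 10

# Shadow phase (placeholder data): posteriors from an original and an unlearned
# shadow model for samples whose membership status is known to the attacker.
shadow_orig = rng.dirichlet(np.ones(n_classes), size=n_samples)
shadow_unlearn = rng.dirichlet(np.ones(n_classes), size=n_samples)
membership = rng.integers(0, 2, size=n_samples)  # 1 = sample was in the removed set

X = np.array([attack_features(o, u) for o, u in zip(shadow_orig, shadow_unlearn)])
attack_model = LogisticRegression(max_iter=1000).fit(X, membership)

# Inference phase: query both target models for one sample and score its membership.
target_orig = rng.dirichlet(np.ones(n_classes))
target_unlearn = rng.dirichlet(np.ones(n_classes))
score = attack_model.predict_proba(
    attack_features(target_orig, target_unlearn)[None, :]
)[0, 1]
print(f"estimated membership probability: {score:.3f}")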

No commits in the last 6 months.

Use this if you are a researcher or practitioner in machine learning and data privacy, looking to test how secure your unlearning methods are against sophisticated privacy attacks.

Not ideal if you are looking for a tool to implement machine unlearning in your production systems or to protect data from general security threats.

data-privacy machine-learning-security unlearning membership-inference privacy-research
Status: Stale (6m) · No package · No dependents

Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 12 / 25


Stars: 50
Forks: 6
Language: Python
License: GPL-3.0
Last pushed: May 20, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/MinChen00/UnlearningLeaks"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
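
The same endpoint can also be queried from a script; below is a minimal sketch using Python's requests library. The response is treated as opaque JSON here, since its schema is not documented on this card.

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/MinChen00/UnlearningLeaks"
resp = requests.get(url, timeout=30)  # anonymous access: 100 requests/day
resp.raise_for_status()
print(resp.json())                    # inspect the payload; field names are not assumed here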