wangkua1/vmi

Code for "Variational Model Inversion Attacks" (Wang et al., NeurIPS 2021)

Score: 28 / 100 (Experimental)

This project helps evaluate the privacy risks of machine learning models. It takes a pre-trained image generation model (such as StyleGAN) and a dataset of images (such as CelebA faces) as input, and outputs reconstructed images that resemble individuals from the training data, even when only a target classifier's outputs were available. It is intended for privacy researchers and machine learning engineers assessing how vulnerable their models are to data extraction attacks.
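The core idea behind model inversion can be illustrated with a toy sketch: search for a latent code z such that the generated image G(z) maximizes a target classifier's score for some class. The linear generator and classifier below are hypothetical stand-ins for illustration only, not the paper's StyleGAN-based method.

```python
import numpy as np

# Toy model-inversion sketch: optimize a latent code z so the "generated"
# image G(z) = W @ z maximizes a target classifier logit f(x) = w @ x.
# W and w are random linear stand-ins (hypothetical, not the paper's models).

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 4))   # toy generator: 4-dim latent -> 16-dim "image"
w = rng.normal(size=16)        # toy classifier direction for the target class

def score(z):
    """Classifier logit on the generated image."""
    return w @ (W @ z)

z = np.zeros(4)
lr = 0.01
for _ in range(200):
    grad = W.T @ w                        # d score / d z in the linear case
    z = z + lr * grad                     # gradient ascent on the logit
    z = z / max(1.0, np.linalg.norm(z))   # crude prior: keep z in the unit ball

# The optimized latent scores higher than the starting point.
assert score(z) > score(np.zeros(4))
```

Real attacks replace the linear pieces with a deep generator and a trained classifier, and (in this paper) a variational objective over the latent distribution rather than a single point estimate.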

No commits in the last 6 months.

Use this if you need to estimate how much private information about individuals in your training data could be reconstructed from a trained model, specifically in settings involving image generation models.

Not ideal if you are looking to secure a model against traditional adversarial attacks or if your model does not deal with sensitive image data.

privacy-research machine-learning-security facial-recognition-privacy generative-models data-privacy-assessment
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 8 / 25
Community 14 / 25

How are scores calculated?

Stars: 22
Forks: 4
Language: Python
License: None
Last pushed: Dec 10, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/wangkua1/vmi"

Open to everyone: 100 requests/day with no key needed, or get a free key for 1,000/day.