zhangzp9970/MIA
Unofficial PyTorch implementation of the paper: Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures
This project helps evaluate the privacy risks of machine learning models. Given a trained model and the confidence scores it outputs, it attempts to reconstruct the original training data, such as a person's face from a facial recognition model. It is aimed at machine learning researchers and privacy engineers who need to assess model vulnerabilities.
No commits in the last 6 months.
Use this if you need to understand how vulnerable your machine learning models are to privacy breaches where sensitive training data might be reconstructed.
Not ideal if you are looking for a tool to implement robust privacy-preserving machine learning techniques, as this focuses on demonstrating vulnerabilities.
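The reconstruction described above boils down to treating the input as the variable to optimize and running gradient ascent on the model's confidence for a chosen class. A minimal sketch of that idea on a toy NumPy softmax classifier (this is not the repo's actual PyTorch code; the function and parameter names here are illustrative):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    e = np.exp(z - z.max())
    return e / e.sum()

def invert_class(W, b, target, steps=200, lr=0.5):
    """Reconstruct a representative input for class `target` by gradient
    ascent on the model's confidence -- the core idea of a model
    inversion attack (toy linear-softmax model, illustrative only)."""
    x = np.zeros(W.shape[1])
    for _ in range(steps):
        p = softmax(W @ x + b)
        # gradient of log p[target] with respect to the input x
        grad = W[target] - p @ W
        x += lr * grad
    return x, softmax(W @ x + b)[target]

# Toy 2-class model: the reconstructed input ends up strongly
# activating the target class.
x, conf = invert_class(np.eye(2), np.zeros(2), target=0)
```

The full attack from the paper (Fredrikson et al., 2015) runs the same confidence-maximization loop against a face-recognition model in image space; this sketch only shows the optimization core.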
Stars: 58
Forks: 32
Language: Python
License: GPL-3.0
Category:
Last pushed: Sep 28, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/zhangzp9970/MIA"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
Higher-rated alternatives
QData/TextAttack
TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model...
ebagdasa/backdoors101
Backdoors Framework for Deep Learning and Federated Learning. A light-weight tool to conduct...
THUYimingLi/backdoor-learning-resources
A list of backdoor learning resources
LukasStruppek/Plug-and-Play-Attacks
[ICML 2022 / ICLR 2024] Source code for our papers "Plug & Play Attacks: Towards Robust and...
VinAIResearch/Warping-based_Backdoor_Attack-release
WaNet - Imperceptible Warping-based Backdoor Attack (ICLR 2021)