ffhibnese/Model-Inversion-Attack-ToolBox
A comprehensive, easy-to-get-started toolbox for model inversion attacks and defenses.
This tool helps researchers and security professionals evaluate how vulnerable a machine learning model is to model inversion attacks, in which an attacker reconstructs sensitive training data from the model itself. You supply a pre-trained model, and the toolbox outputs reconstructed data that resembles the model's original training data. It is used by privacy researchers and AI security experts to test and compare different attack and defense methods.
192 stars. No commits in the last 6 months.
Use this if you need to benchmark and compare various techniques for attacking or defending against model inversion, or if you're researching privacy risks in machine learning.
Not ideal if you're looking for a general-purpose machine learning library or a tool for data preprocessing or model training.
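To make the attack model concrete, here is a minimal toy sketch of the core idea behind model inversion (this is not the toolbox's API; the classifier, weights, and function names are all made up for illustration): given only a trained classifier, run gradient ascent on the *input* to maximize the model's confidence for a target class, recovering an input resembling what the model learned for that class.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "trained" softmax classifier: 2 classes, 4 input features.
# In a real attack these weights belong to the victim model.
W = rng.normal(size=(2, 4))
b = np.zeros(2)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def invert(target_class, steps=200, lr=0.5):
    """Gradient ascent on the input x to maximize P(target_class | x)."""
    x = rng.normal(size=4) * 0.01  # start from near-zero noise
    for _ in range(steps):
        p = softmax(W @ x + b)
        # Gradient of log p[target] w.r.t. x for a softmax classifier:
        grad = W[target_class] - p @ W
        x += lr * grad
    return x, softmax(W @ x + b)[target_class]

x_rec, conf = invert(target_class=0)  # conf approaches 1 as x_rec is optimized
```

Real attacks in the toolbox operate on image classifiers with generative priors and regularizers, but the optimization loop above is the underlying principle.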
Stars
192
Forks
14
Language
Python
License
—
Category
Last pushed
Sep 23, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/ffhibnese/Model-Inversion-Attack-ToolBox"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
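The same endpoint can be queried from Python. This is a hypothetical sketch: only the URL comes from the listing above; the response fields and their names are not documented here, so the actual fetch is left commented out.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the API URL for a repository's quality data."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("ml-frameworks", "ffhibnese", "Model-Inversion-Attack-ToolBox")
# data = json.load(urllib.request.urlopen(url))  # requires network access
print(url)
```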
Higher-rated alternatives
QData/TextAttack
TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model...
ebagdasa/backdoors101
Backdoors Framework for Deep Learning and Federated Learning. A light-weight tool to conduct...
THUYimingLi/backdoor-learning-resources
A list of backdoor learning resources
zhangzp9970/MIA
Unofficial pytorch implementation of paper: Model Inversion Attacks that Exploit Confidence...
LukasStruppek/Plug-and-Play-Attacks
[ICML 2022 / ICLR 2024] Source code for our papers "Plug & Play Attacks: Towards Robust and...