ffhibnese/Model-Inversion-Attack-ToolBox

A comprehensive toolbox for model inversion attacks and defenses that is easy to get started with.

Score: 32 / 100 (Emerging)

This tool helps researchers and security professionals evaluate how vulnerable a machine learning model is to model inversion attacks, in which an attacker reconstructs sensitive training data from the model itself. You give it a pre-trained model, and it outputs reconstructed data that resembles the model's original training data. Privacy researchers and AI security experts use it to test and compare attack and defense methods.
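To make the idea concrete, here is a minimal sketch of the core mechanism behind many model inversion attacks: gradient ascent on the input to maximize the model's confidence for a target class, recovering the pattern the model associates with that class. This is not the toolbox's API; the toy linear model and all names below are illustrative assumptions.

```python
import numpy as np

# Toy "pre-trained" model: softmax over W @ x. The rows of W stand in
# for patterns the model learned from its training data.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))  # 3 classes, 8 input features

def class_score(x, c):
    # Softmax probability the model assigns to class c for input x.
    logits = W @ x
    logits -= logits.max()
    p = np.exp(logits) / np.exp(logits).sum()
    return p[c]

# Model inversion by gradient ascent: find an input that maximizes
# the model's confidence for class 0 (numerical gradient for brevity).
x = np.zeros(8)
lr, eps = 0.5, 1e-5
for _ in range(200):
    grad = np.array([
        (class_score(x + eps * e, 0) - class_score(x - eps * e, 0)) / (2 * eps)
        for e in np.eye(8)
    ])
    x += lr * grad

# The reconstruction aligns with the class-0 weight row, i.e. the
# direction the model associates with that class.
cos = x @ W[0] / (np.linalg.norm(x) * np.linalg.norm(W[0]))
print(round(float(cos), 2), round(float(class_score(x, 0)), 3))
```

Real attacks in the toolbox replace the toy linear model with deep networks and add priors (e.g. GANs) so the reconstruction looks like plausible data, but the optimization loop above is the common core.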

192 stars. No commits in the last 6 months.

Use this if you need to benchmark and compare various techniques for attacking or defending against model inversion, or if you're researching privacy risks in machine learning.

Not ideal if you're looking for a general-purpose machine learning library or a tool for data preprocessing or model training.

Tags: AI privacy research · machine learning security · data reconstruction attacks · model vulnerability assessment · adversarial machine learning
Badges: No License · Stale (6 months) · No Package · No Dependents
Maintenance 2 / 25
Adoption 10 / 25
Maturity 8 / 25
Community 12 / 25


Stars: 192
Forks: 14
Language: Python
License: none
Last pushed: Sep 23, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/ffhibnese/Model-Inversion-Attack-ToolBox"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.