zealscott/MIA

Source code for "Cascading and Proxy Membership Inference Attacks" (NDSS 2026).

Quality score: 14 / 100 (Experimental)

This project evaluates the privacy risks of machine learning models by determining whether specific data points were used in their training. It takes a deployed model and candidate data as input, then outputs a judgment on whether that data was part of the model's training set. It is aimed at privacy researchers and security auditors who need to assess a model's vulnerability to membership inference attacks.
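To illustrate the general idea behind membership inference (this is a minimal confidence-threshold sketch, not the repo's cascading or proxy method; the function name and threshold are assumptions for the example):

```python
# Minimal confidence-threshold membership inference attack (illustrative only).
# Intuition: models are typically more confident on records they were trained on,
# so high confidence on a record's true label is weak evidence of membership.

def confidence_threshold_mia(confidence, threshold=0.9):
    """Flag a record as a training-set member when the target model's
    confidence on the record's true label meets the threshold."""
    return confidence >= threshold

# Toy confidences from a hypothetical target model:
member_confidences = [0.98, 0.95, 0.91]     # records seen during training
nonmember_confidences = [0.60, 0.85, 0.40]  # held-out records

member_preds = [confidence_threshold_mia(c) for c in member_confidences]
nonmember_preds = [confidence_threshold_mia(c) for c in nonmember_confidences]
```

Real attacks, including the cascading and proxy variants this repo implements, use far stronger signals than a single threshold, but the attacker's decision problem (member vs. non-member, given only model outputs) is the same.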

No commits in the last 6 months.

Use this if you need to test the privacy robustness of a machine learning model against advanced membership inference attacks without requiring access to its original training data distribution.

Not ideal if you are looking for a tool to enhance model privacy or anonymize datasets, as this focuses solely on identifying privacy vulnerabilities.

Tags: AI-security, data-privacy, model-auditing, machine-learning-vulnerability
Badges: No License · Stale (6m) · No Package · No Dependents

Score breakdown:
Maintenance: 2 / 25
Adoption: 5 / 25
Maturity: 7 / 25
Community: 0 / 25


Stars: 10
Forks: (not listed)
Language: Python
License: none
Last pushed: Aug 17, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/zealscott/MIA"

Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
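The same endpoint can be queried from Python. The live call is sketched but commented out; the JSON field names below are assumptions for illustration (the API's schema is not documented on this page), with values mirroring the scores shown above:

```python
import json
from urllib.request import urlopen  # for the live call

API_URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/zealscott/MIA"

# Live call (uncomment to fetch; counts against the 100 requests/day anonymous limit):
# with urlopen(API_URL) as resp:
#     data = json.load(resp)

# Offline illustration -- hypothetical response shape, values from this page:
sample_body = '{"score": 14, "maintenance": 2, "adoption": 5, "maturity": 7, "community": 0}'
data = json.loads(sample_body)

# The four sub-scores (each out of 25) sum to the overall score out of 100.
total = data["maintenance"] + data["adoption"] + data["maturity"] + data["community"]
```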