Membership Inference Attack Frameworks for ML

Tools and implementations for detecting whether specific data points were used in a model's training set, including attack methods, defenses, and privacy analysis frameworks. Does NOT include general privacy-preserving ML, differential privacy libraries, or unrelated data poisoning/adversarial attack tooling.
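The core idea behind the simplest of these attacks is that training-set members tend to incur lower loss than non-members, because the model has partially memorized them. A minimal loss-threshold sketch on synthetic losses (a toy illustration with made-up distributions, not any listed framework's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: simulated per-example losses for points that were in the
# training set ("members") and points that were not ("non-members").
# The gamma parameters below are invented for illustration only.
member_loss = rng.gamma(shape=2.0, scale=0.2, size=1000)     # lower on average
nonmember_loss = rng.gamma(shape=2.0, scale=0.5, size=1000)  # higher on average

# Loss-threshold attack: predict "member" whenever the loss falls below
# a threshold, here the mean loss over known non-members.
threshold = nonmember_loss.mean()
tpr = (member_loss < threshold).mean()     # fraction of members caught
fpr = (nonmember_loss < threshold).mean()  # fraction of non-members misflagged
print(f"threshold={threshold:.3f}  TPR={tpr:.2%}  FPR={fpr:.2%}")
```

Real frameworks in the list below replace the synthetic losses with losses queried from a trained model, and many use stronger statistics (shadow models, likelihood ratios as in LiRA) instead of a single global threshold.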

There are 48 membership inference attack frameworks tracked. One reaches the verified tier (score 70 or above): google/scaaml, the highest-rated project at 70/100 with 193 stars.

Get all 48 projects as JSON:

curl "https://pt-edge.onrender.com/api/v1/datasets/quality?domain=ml-frameworks&subcategory=membership-inference-attacks&limit=48"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
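The curl call above can also be built programmatically. A small sketch using only the query parameters shown in the documented example (`domain`, `subcategory`, `limit`); any other parameters the API may support are not assumed here:

```python
from urllib.parse import urlencode

# Base endpoint taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/datasets/quality"

def dataset_url(domain: str, subcategory: str, limit: int = 48) -> str:
    """Build the quality-dataset query URL with properly encoded parameters."""
    query = urlencode({"domain": domain, "subcategory": subcategory, "limit": limit})
    return f"{BASE}?{query}"

url = dataset_url("ml-frameworks", "membership-inference-attacks")
print(url)
```

Pass the resulting URL to any HTTP client; `urlencode` handles quoting, so values containing special characters stay valid.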

| # | Framework | Description | Score | Tier |
|---:|---|---|---:|---|
| 1 | google/scaaml | SCAAML: Side Channel Attacks Assisted with Machine Learning | 70 | Verified |
| 2 | pralab/secml | A Python library for Secure and Explainable Machine Learning | 54 | Established |
| 3 | Koukyosyumei/AIJack | Security and Privacy Risk Simulator for Machine Learning (arXiv:2312.17667) | 53 | Established |
| 4 | AI-SDC/SACRO-ML | Collection of tools and resources for managing the statistical disclosure... | 50 | Established |
| 5 | liuyugeng/ML-Doctor | Code for ML-Doctor | 45 | Emerging |
| 6 | oss-slu/mithridatium | Mithridatium is a research-driven project aimed at detecting backdoors and... | 45 | Emerging |
| 7 | matteonerini/pin-side-channel-attacks | Machine Learning for PIN Side-Channel Attacks Based on Smartphone Motion Sensors | 45 | Emerging |
| 8 | ArtLabss/open-data-anonymizer | Python data anonymization & masking library for data science tasks | 43 | Emerging |
| 9 | microsoft/responsible-ai-toolbox-privacy | A library for statistically estimating the privacy of ML pipelines from... | 42 | Emerging |
| 10 | yonsei-sslab/MIA | 🔒 Implementation of Shokri et al. (2016), "Membership Inference Attacks against... | 40 | Emerging |
| 11 | stratosphereips/awesome-ml-privacy-attacks | An awesome list of papers on privacy attacks against machine learning | 39 | Emerging |
| 12 | zhoumingyi/ModelObfuscator | Code for the paper "ModelObfuscator: Obfuscating Model Information to... | 38 | Emerging |
| 13 | brian-lou/Training-Data-Extraction-Attack-on-LLMs | Explores training data extraction attacks on the LLaMa 7B,... | 37 | Emerging |
| 14 | YujiaBao/ls | Learning to Split for Automatic Bias Detection | 37 | Emerging |
| 15 | MinChen00/UnlearningLeaks | Official implementation of "When Machine Unlearning Jeopardizes Privacy"... | 36 | Emerging |
| 16 | mmalekzadeh/honest-but-curious-nets | Honest-but-Curious Nets: Sensitive Attributes of Private Inputs Can Be... | 35 | Emerging |
| 17 | allensll/Awesome-Crypto-DNN | List of papers on cryptography-assisted deep learning privacy computation | 35 | Emerging |
| 18 | MichaelTJC96/Label_Flipping_Attack | Evaluates the vulnerability of federated learning systems... | 30 | Emerging |
| 19 | yangarbiter/dp-dg | What You See is What You Get: Distributional Generalization for Algorithm... | 29 | Experimental |
| 20 | najeebjebreel/lira_analysis | Revisiting the LiRA Membership Inference Attack Under Realistic Assumptions | 29 | Experimental |
| 21 | dahmansphi/attackai | Test tool simulating two types of poisoning attack on an AI model | 28 | Experimental |
| 22 | karthik7129/FL-IoT-Threat_detection | An end-to-end federated learning pipeline for IoT threat detection | 28 | Experimental |
| 23 | X1aoyangXu/FORA | Official code of the paper "A Stealthy Wrongdoer: Feature-Oriented... | 28 | Experimental |
| 24 | VissaMoutafis/Membership-Inference-Research | Bachelor's thesis on membership inference attacks | 28 | Experimental |
| 25 | dahmansphi/protectai | Test tool simulating defense against a poisoning attack on an AI model | 27 | Experimental |
| 26 | ege-erdogan/unsplit | Supplementary code for the paper "UnSplit: Data-Oblivious Model Inversion,... | 27 | Experimental |
| 27 | FRANCYZXZ/federated-backdoor-mitigation | A comprehensive framework simulating integrity (backdoor) and privacy... | 27 | Experimental |
| 28 | Pilladian/ml-attack-framework | Universität des Saarlandes, Privacy Enhancing Technologies 2021 semester project | 24 | Experimental |
| 29 | ibhushani/amnesia | 🧠 Enterprise-grade machine unlearning architecture. Surgically erases data... | 24 | Experimental |
| 30 | trucndt/ami | Codebase for Active Membership Inference Attack under Local Differential... | 24 | Experimental |
| 31 | Jiaqi0602/adversarial-attack-from-leakage | From Gradient Leakage to Adversarial Attacks in Federated Learning | 23 | Experimental |
| 32 | FRANCYZXZ/Federated-Learning-Security-Backdoor-Attacks-Gradient-Inversion-Unlearning | A comprehensive framework simulating integrity (backdoor) and privacy... | 23 | Experimental |
| 33 | davidemodolo/malicious_llm_finetuning | Proof of concept demonstrating backdoor injection into fine-tuned LLMs using... | 22 | Experimental |
| 34 | davidemodolo/malicious_finetuning | Proof of concept demonstrating backdoor injection into fine-tuned LLMs using... | 22 | Experimental |
| 35 | ljvmiranda921/vs-split | A Python library for creating adversarial splits | 21 | Experimental |
| 36 | Abhishek-yadav04/AgisFL | AgisFL is a cutting-edge, production-ready cybersecurity platform that... | 21 | Experimental |
| 37 | hardware-fab/DLaTA | A deep-learning-assisted template attack against dynamic frequency scaling... | 20 | Experimental |
| 38 | dAI-SY-Group/PRECODE | Source code and demonstration for the paper "PRECODE - A Generic Model... | 20 | Experimental |
| 39 | hardware-fab/Hound | Hound: Locating Cryptographic Primitives in Desynchronized Side-Channel... | 20 | Experimental |
| 40 | gongzhimin/Copyright-Protection-Studies-in-Deep-Learning | A repository of literature on copyright protection in deep learning | 19 | Experimental |
| 41 | AmanPriyanshu/The-Unlearning-Protocol | Choose which data to make your model forget (unlearn!), but watch out -... | 18 | Experimental |
| 42 | DoktorC/double-strike-host2026 | Official repository of the paper "Double Strike: Breaking... | 17 | Experimental |
| 43 | zealscott/MIA | Source code for Cascading and Proxy Membership Inference Attacks (NDSS 2026) | 14 | Experimental |
| 44 | VirajM723/MachineUnlearning | Machine unlearning using SISA training to efficiently remove data points... | 14 | Experimental |
| 45 | ege-erdogan/splitguard | Supplementary code for the paper "SplitGuard: Detecting and... | 13 | Experimental |
| 46 | Axelboutie/Deep-Learning-for-Side-Channels-Attacks | Provides a convolutional neural network or MLP model to... | 13 | Experimental |
| 47 | paoyw/DLS-MIA | Investigating the privacy vulnerabilities in deep learning steganography... | 11 | Experimental |
| 48 | attackbench/attackbench.github.io | The AttackBench framework aims to fairly compare gradient-based attacks... | 10 | Experimental |