AI-SDC/SACRO-ML
Collection of tools and resources for managing the statistical disclosure control of trained machine learning models
This project helps data scientists and ML engineers assess the privacy of sensitive data used to train machine learning models. It takes a trained classification model and its training data, evaluates the model's vulnerability to attacks that could expose confidential information, and produces a human-readable report summarizing the disclosure risks, so you can make models safer before or after training.
Use this if you are building or deploying machine learning models and need to quantify and mitigate the risk of private training data being revealed through model behavior.
Not ideal if your primary concern is model interpretability or general bias detection, as this tool specifically focuses on data disclosure risk.
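To make the kind of risk it measures concrete, here is a generic sketch of one attack family such tools evaluate: membership inference. This is NOT SACRO-ML's API (the class names and workflow below are plain scikit-learn, assumed for illustration only); it shows how an overfit model's higher confidence on its own training rows becomes a disclosure signal.

```python
# Hedged illustration of a membership-inference risk check, not SACRO-ML's API.
# Idea: a model that memorizes its training data is more confident on rows it
# has seen than on unseen rows; an attacker can exploit that gap to infer
# whether a given record was in the (possibly sensitive) training set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a sensitive dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Deliberately unregularized target model, so the leak is visible.
target = RandomForestClassifier(n_estimators=100, random_state=0)
target.fit(X_train, y_train)

# Attack feature: the model's confidence in its predicted class.
conf_members = target.predict_proba(X_train).max(axis=1)
conf_nonmembers = target.predict_proba(X_test).max(axis=1)

# AUC of separating members from non-members by confidence alone:
# 0.5 means no measurable leak; values near 1.0 mean high disclosure risk.
scores = np.concatenate([conf_members, conf_nonmembers])
labels = np.concatenate(
    [np.ones_like(conf_members), np.zeros_like(conf_nonmembers)]
)
auc = roc_auc_score(labels, scores)
print(f"membership-inference AUC: {auc:.2f}")
```

A disclosure-control tool runs attacks like this (and stronger ones) automatically and folds the results into its risk report; mitigations such as regularization or reduced model capacity shrink the confidence gap.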
Stars: 34
Forks: 8
Language: Python
License: MIT
Category:
Last pushed: Mar 13, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/AI-SDC/SACRO-ML"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related frameworks
google/scaaml
SCAAML: Side Channel Attacks Assisted with Machine Learning
pralab/secml
A Python library for Secure and Explainable Machine Learning
Koukyosyumei/AIJack
Security and Privacy Risk Simulator for Machine Learning (arXiv:2312.17667)
oss-slu/mithridatium
Mithridatium is a research-driven project aimed at detecting backdoors and data poisoning in...
matteonerini/pin-side-channel-attacks
Machine Learning for PIN Side-Channel Attacks Based on Smartphone Motion Sensors