AI-SDC/SACRO-ML

Collection of tools and resources for managing the statistical disclosure control of trained machine learning models

Overall score: 50 / 100 (Established)

This project helps data scientists and ML engineers protect the privacy of sensitive data used to train machine learning models. It takes a trained classification model and its training data, then evaluates the model's vulnerability to attacks that could expose confidential information. The output is a human-readable report summarizing the disclosure risks, helping you make models safer during development and before release.

Use this if you are building or deploying machine learning models and need to quantify and mitigate the risk of private training data being revealed through model behavior.

Not ideal if your primary concern is model interpretability or general bias detection, as this tool specifically focuses on data disclosure risk.
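To make the kind of vulnerability being assessed concrete, here is an illustrative sketch of a confidence-thresholding membership-inference attack, one of the classic disclosure risks such tools evaluate. This is not SACRO-ML's own implementation; the model, data, and threshold below are all hypothetical, chosen only to show the idea: an overfit model is more confident on records it was trained on, so an attacker can guess membership from prediction confidence alone.

```python
# Illustrative membership-inference risk check (NOT SACRO-ML's method):
# guess "member of the training set" whenever the model's top-class
# probability is high. Attack accuracy well above the 50% coin-flip
# baseline signals that the model leaks information about its training data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# Fully grown trees tend to memorize, leaking membership via over-confidence.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

conf_members = model.predict_proba(X_train).max(axis=1)   # training records
conf_outsiders = model.predict_proba(X_test).max(axis=1)  # unseen records

threshold = 0.9  # hypothetical cut-off; real assessments sweep many thresholds
guesses = np.concatenate([conf_members, conf_outsiders]) >= threshold
truth = np.concatenate(
    [np.ones(len(conf_members)), np.zeros(len(conf_outsiders))]
)
attack_acc = (guesses == truth).mean()
print(f"attack accuracy: {attack_acc:.2f} (0.50 = no leakage)")
```

A report like the one this project produces would summarize metrics of this kind across several attack types, rather than a single threshold attack.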

data-privacy machine-learning-governance risk-assessment confidentiality responsible-AI
Package: none · Dependents: none
Maintenance: 10 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 17 / 25


Stars: 34
Forks: 8
Language: Python
License: MIT
Last pushed: Mar 13, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/AI-SDC/SACRO-ML"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
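For use from a script rather than the shell, the same endpoint can be called from Python. The base URL and path come from the curl command above; the helper name and the shape of the JSON response are assumptions, not part of the documented API.

```python
# Hedged sketch: querying the quality API from Python instead of curl.
# Only the URL pattern is taken from the page; response fields are unknown here.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(owner: str, repo: str, category: str = "ml-frameworks") -> str:
    """Build the quality-report URL for a repository (hypothetical helper)."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

url = quality_url("AI-SDC", "SACRO-ML")
print(url)

# Uncomment to fetch (no key needed up to 100 requests/day):
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
#     print(data)
```

Keeping the fetch commented out avoids a network dependency; swap in your API key via a request header once you register for the 1,000/day tier, following whatever scheme the service documents.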