microsoft/responsible-ai-toolbox-privacy

A library for statistically estimating the privacy of ML pipelines from membership inference attacks

Quality score: 42 / 100 (Emerging)

This tool helps machine learning engineers and researchers assess the privacy risks of their models. Given the results of a membership inference attack (true positives, true negatives, false positives, false negatives), it estimates the differential privacy (DP) epsilon value, providing a quantifiable measure of how much an attacker can infer about individual training records.
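As a rough sketch of the idea (not this library's actual API), the attack's false positive and false negative rates bound the epsilon of any (epsilon, delta)-DP mechanism via the standard relation epsilon ≥ log((1 − delta − FPR) / FNR), and symmetrically with FPR and FNR swapped. The helper below is hypothetical and illustrative only; the library itself performs a statistical estimate with uncertainty, which this one-shot calculation does not capture.

```python
import math

def empirical_epsilon_lower_bound(tp, tn, fp, fn, delta=1e-5):
    """Illustrative (hypothetical) lower bound on DP epsilon from
    membership-inference attack results, using the FPR/FNR relation
    for (epsilon, delta)-DP mechanisms. Not this library's API."""
    fpr = fp / (fp + tn)  # false positive rate of the attack
    fnr = fn / (fn + tp)  # false negative rate of the attack
    bounds = []
    # Each direction of the trade-off yields a candidate bound,
    # valid only when the log argument is positive.
    if fnr > 0 and (1 - delta - fpr) > 0:
        bounds.append(math.log((1 - delta - fpr) / fnr))
    if fpr > 0 and (1 - delta - fnr) > 0:
        bounds.append(math.log((1 - delta - fnr) / fpr))
    # A perfect attack (no errors) implies an unbounded epsilon.
    return max(bounds) if bounds else float("inf")
```

For example, an attack with 20% FPR and 20% FNR certifies only that epsilon is at least about 1.39; a weaker attack gives a weaker bound, never an upper bound on the true epsilon.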

No commits in the last 6 months.

Use this if you are developing or deploying machine learning models and need to empirically estimate their privacy guarantees against membership inference attacks.

Not ideal if you are looking for a method to *implement* differential privacy from scratch or for formal mathematical proofs of privacy.

Topics: privacy-engineering, machine-learning-auditing, data-privacy, model-security, responsible-AI
Flags: Stale (6 months), No Package, No Dependents
Maintenance: 2 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 17 / 25


Stars: 37
Forks: 8
Language: Python
License: MIT
Last pushed: Aug 21, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/microsoft/responsible-ai-toolbox-privacy"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.