microsoft/responsible-ai-toolbox-privacy
A library for statistically estimating the privacy of ML pipelines from membership inference attacks
This tool helps machine learning engineers and researchers assess the privacy risks of their models. From the results of a membership inference attack (true positives, true negatives, false positives, and false negatives), it estimates an empirical differential privacy (DP) epsilon, giving a quantifiable measure of how much an attacker can infer about individual training examples.
No commits in the last 6 months.
Use this if you are developing or deploying machine learning models and need to empirically estimate their privacy guarantees against membership inference attacks.
Not ideal if you are looking for a method to *implement* differential privacy from scratch or for formal mathematical proofs of privacy.
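The underlying idea can be sketched with the standard hypothesis-testing characterization of (ε, δ)-DP: any (ε, δ)-DP mechanism forces an attacker's error rates to satisfy FPR + e^ε · FNR ≥ 1 − δ (and symmetrically), so observed attack errors yield a lower bound on ε. The function below is a minimal illustration of that bound, not this library's API; the function name, signature, and the default delta are hypothetical.

```python
import math

def epsilon_lower_bound(tp, tn, fp, fn, delta=1e-5):
    """Empirical lower bound on epsilon from attack confusion counts.

    Based on the hypothesis-testing view of (eps, delta)-DP:
    FPR + exp(eps) * FNR >= 1 - delta, and the symmetric inequality,
    so eps >= log((1 - delta - FPR) / FNR) whenever the terms are valid.
    Names and defaults here are illustrative, not the library's API.
    """
    fpr = fp / (fp + tn)  # attack false positive rate
    fnr = fn / (fn + tp)  # attack false negative rate
    candidates = []
    for a, b in ((fpr, fnr), (fnr, fpr)):
        if b > 0 and 1 - delta - a > 0:
            candidates.append(math.log((1 - delta - a) / b))
    # A random-guessing attack yields a bound near 0; a perfect
    # attack (no errors) would imply an unbounded epsilon.
    return max(candidates, default=float("inf"))

print(epsilon_lower_bound(tp=90, tn=85, fp=15, fn=10))
```

Note this is only a lower bound from a single attack: a weak attack says little about the true epsilon, which is why the library frames the result as a statistical estimate rather than a guarantee.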
Stars: 37
Forks: 8
Language: Python
License: MIT
Category:
Last pushed: Aug 21, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/microsoft/responsible-ai-toolbox-privacy"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
google/scaaml
SCAAML: Side Channel Attacks Assisted with Machine Learning
pralab/secml
A Python library for Secure and Explainable Machine Learning
Koukyosyumei/AIJack
Security and Privacy Risk Simulator for Machine Learning (arXiv:2312.17667)
AI-SDC/SACRO-ML
Collection of tools and resources for managing the statistical disclosure control of trained...
liuyugeng/ML-Doctor
Code for ML Doctor