privacytrustlab/ml_privacy_meter
Privacy Meter: An open-source library to audit data privacy in statistical and machine learning algorithms.
This tool helps data privacy officers and risk managers assess how much private information a machine learning model leaks about its training data. You provide a trained model and the data used to train it, and it generates reports quantifying the privacy risk, such as whether an attacker could infer that a specific individual's record was part of the training set (membership inference). It is aimed at professionals responsible for ensuring AI systems comply with privacy regulations.
703 stars. No commits in the last 6 months.
Use this if you need to quantitatively evaluate and verify the privacy of individuals' data used in your AI systems, especially for compliance with regulations like GDPR.
Not ideal if you are looking for a tool to anonymize your data or to develop privacy-preserving machine learning models directly, as this focuses on auditing existing models.
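Privacy Meter itself drives its audits through its own configuration and reporting pipeline; as a generic illustration of the kind of test it runs, here is a minimal loss-threshold membership inference sketch. All names and the synthetic loss distributions are illustrative, not the library's API: the attack simply checks whether training members have systematically lower loss than non-members, summarized as an attack AUC.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative per-example losses. Overfit models tend to show lower loss on
# training members than on held-out non-members, which the attacker exploits.
member_losses = rng.gamma(shape=2.0, scale=0.5, size=1000)     # lower on average
nonmember_losses = rng.gamma(shape=2.0, scale=1.0, size=1000)  # higher on average

def membership_auc(member_losses, nonmember_losses):
    """AUC of the loss-threshold attack.

    Equals the probability that a randomly chosen member has lower loss than
    a randomly chosen non-member (Mann-Whitney / rank formulation).
    """
    m = -np.asarray(member_losses)     # score = -loss: members should score higher
    n = -np.asarray(nonmember_losses)
    wins = (m[:, None] > n[None, :]).mean()   # correctly ordered pairs
    ties = (m[:, None] == n[None, :]).mean()  # ties count half
    return wins + 0.5 * ties

auc = membership_auc(member_losses, nonmember_losses)
print(f"membership inference AUC: {auc:.2f}")
```

An AUC near 0.5 means the attacker does no better than guessing (little leakage); values approaching 1.0 indicate the model's losses clearly separate members from non-members. Privacy Meter's reports quantify this kind of risk with stronger attacks and per-record analysis.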
Stars: 703
Forks: 150
Language: Jupyter Notebook
License: MIT
Category:
Last pushed: Apr 26, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/privacytrustlab/ml_privacy_meter"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
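The same data can be fetched from a script. A minimal Python sketch is below; note that the JSON field names (`stars`, `forks`, `last_pushed`) are assumptions mirroring the fields displayed above, not a documented schema, so check the actual response before relying on them. The live call is left commented out to respect the request limit; the demonstration parses a sample payload instead.

```python
import json
from urllib.request import urlopen  # only needed for the live call

API_URL = ("https://pt-edge.onrender.com/api/v1/quality/"
           "ml-frameworks/privacytrustlab/ml_privacy_meter")

def summarize(payload: dict) -> str:
    """One-line summary of a repo-quality payload.

    Field names are assumed from the page's displayed stats, not a
    documented schema; .get() keeps this robust to missing fields.
    """
    return (f"{payload.get('stars', '?')} stars, "
            f"{payload.get('forks', '?')} forks, "
            f"last pushed {payload.get('last_pushed', '?')}")

# Live call (uncomment to hit the endpoint; counts against the daily quota):
# payload = json.load(urlopen(API_URL))

# Offline demonstration with a sample payload mirroring the stats above:
sample = json.loads('{"stars": 703, "forks": 150, "last_pushed": "2025-04-26"}')
print(summarize(sample))  # → 703 stars, 150 forks, last pushed 2025-04-26
```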
Related frameworks
meta-pytorch/opacus
Training PyTorch models with differential privacy
tensorflow/privacy
Library for training machine learning models with privacy for training data
tf-encrypted/tf-encrypted
A Framework for Encrypted Machine Learning in TensorFlow
awslabs/fast-differential-privacy
Fast, memory-efficient, scalable optimization of deep learning with differential privacy
IBM/differential-privacy-library
Diffprivlib: The IBM Differential Privacy Library