liuyugeng/ML-Doctor
Code for ML Doctor
This tool helps machine learning engineers and researchers assess the security and privacy risks of their trained models. It takes your existing models and datasets (such as facial images or fashion items) as input, then simulates inference attacks to measure how vulnerable the models are to membership inference, model inversion, attribute inference, and model stealing, giving you a clearer picture of their security posture.
No commits in the last 6 months.
Use this if you are a machine learning engineer concerned about the privacy and security vulnerabilities of your deployed models and want to evaluate their resilience against common inference attacks.
Not ideal if you are looking for a tool to build or train machine learning models from scratch, as this focuses specifically on risk assessment of pre-existing models.
Stars: 92
Forks: 23
Language: Python
License: Apache-2.0
Category:
Last pushed: Aug 14, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/liuyugeng/ML-Doctor"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
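The same endpoint can be called programmatically. The sketch below builds the per-repository URL and fetches it with Python's standard library; only the endpoint shown in the curl command above is assumed, and since the response schema is not documented here, the JSON is returned as-is rather than parsed into named fields.

```python
import json
from urllib.request import urlopen

# Base endpoint from the curl example above (assumed stable).
BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository endpoint URL for a GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """GET the quality record for owner/repo (100 requests/day without a key)."""
    with urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)
```

For example, `fetch_quality("liuyugeng", "ML-Doctor")` should return the same JSON document as the curl command above.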
Higher-rated alternatives
google/scaaml
SCAAML: Side Channel Attacks Assisted with Machine Learning
pralab/secml
A Python library for Secure and Explainable Machine Learning
Koukyosyumei/AIJack
Security and Privacy Risk Simulator for Machine Learning (arXiv:2312.17667)
AI-SDC/SACRO-ML
Collection of tools and resources for managing the statistical disclosure control of trained...
oss-slu/mithridatium
Mithridatium is a research-driven project aimed at detecting backdoors and data poisoning in...