Jiaqi0602/adversarial-attack-from-leakage
From Gradient Leakage to Adversarial Attacks in Federated Learning
This project helps machine learning researchers and security professionals understand and demonstrate how private data used in federated learning can be exposed. It takes the gradients a client shares during training and attempts to reconstruct the original input data, revealing a vulnerability that can then seed adversarial attacks on classification tasks. It is aimed at those investigating privacy concerns and potential threats in distributed machine learning systems.
No commits in the last 6 months.
Use this if you need to demonstrate how privacy can be breached in a federated learning setup by reconstructing input data from shared gradients (a minimal sketch of the technique follows below).
Not ideal if you want a tool that actively prevents data leakage or provides a ready-made privacy-preserving federated learning solution.
Stars: 16
Forks: 2
Language: Jupyter Notebook
License: —
Category: ml-frameworks
Last pushed: Sep 21, 2021
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Jiaqi0602/adversarial-attack-from-leakage"
Open to everyone: 100 requests/day with no key needed. Get a free key to raise the limit to 1,000/day.
Higher-rated alternatives
google/scaaml
SCAAML: Side Channel Attacks Assisted with Machine Learning
pralab/secml
A Python library for Secure and Explainable Machine Learning
Koukyosyumei/AIJack
Security and Privacy Risk Simulator for Machine Learning (arXiv:2312.17667)
AI-SDC/SACRO-ML
Collection of tools and resources for managing the statistical disclosure control of trained...
liuyugeng/ML-Doctor
Code for ML Doctor