Jiaqi0602/adversarial-attack-from-leakage

From Gradient Leakage to Adversarial Attacks in Federated Learning

Score: 23 / 100 (Experimental)

This project helps machine learning researchers and security professionals understand and demonstrate how private data used in federated learning can be exposed. It takes the gradients a client shares during federated training and attempts to reconstruct the original input data, revealing vulnerabilities that could enable adversarial attacks on classification tasks. It is intended for those investigating privacy concerns and potential threats in distributed machine learning systems.
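
To make the idea concrete, the core technique (gradient inversion in the style of "Deep Leakage from Gradients") can be sketched in a few lines of PyTorch. This is a minimal illustration, not the repository's actual code: the function name, hyperparameters, and loss formulation here are assumptions, and it presumes a differentiable classifier whose per-example gradients were leaked.

import torch
import torch.nn.functional as F

def reconstruct(model, leaked_grads, input_shape, num_classes, steps=30):
    # Illustrative DLG-style sketch: start from random noise for both the
    # candidate input and a soft candidate label, then optimize them so the
    # gradients they induce match the leaked ones.
    x = torch.randn(1, *input_shape, requires_grad=True)
    y = torch.randn(1, num_classes, requires_grad=True)
    opt = torch.optim.LBFGS([x, y])

    def closure():
        opt.zero_grad()
        # Cross-entropy against the soft candidate label.
        loss = torch.mean(torch.sum(
            -F.softmax(y, dim=-1) * F.log_softmax(model(x), dim=-1), dim=-1))
        grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        # Squared distance between candidate gradients and leaked gradients.
        diff = sum(((g - t) ** 2).sum() for g, t in zip(grads, leaked_grads))
        diff.backward()
        return diff

    for _ in range(steps):
        opt.step(closure)
    return x.detach(), y.detach()

If the optimization converges, x approximates the original training input and y its label, which is exactly the privacy breach this repository demonstrates.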

No commits in the last 6 months.

Use this if you need to demonstrate how privacy can be breached in a federated learning setup by reconstructing input data from shared gradients.

Not ideal if you are looking for a tool to actively prevent data leakage or directly implement robust privacy-preserving federated learning solutions.

federated-learning data-privacy adversarial-machine-learning model-security image-classification
No License · Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 8 / 25
Community: 9 / 25

Stars: 16
Forks: 2
Language: Jupyter Notebook
License: None
Last pushed: Sep 21, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Jiaqi0602/adversarial-attack-from-leakage"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
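
For scripted access, the same endpoint can be queried from Python. A minimal sketch, assuming only that the endpoint returns JSON (the response schema is not documented here, so no specific fields are assumed):

import requests

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/Jiaqi0602/adversarial-attack-from-leakage")

# Unauthenticated access is rate-limited to 100 requests/day.
resp = requests.get(URL, timeout=10)
resp.raise_for_status()
print(resp.json())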