X1aoyangXu/FORA

Official code of the paper "A Stealthy Wrongdoer: Feature-Oriented Reconstruction Attack against Split Learning".

Score: 28 / 100 · Experimental

This project offers a way to assess the privacy vulnerabilities of split learning systems. It takes as input the 'smashed data' from a client and auxiliary public data on the server side, then outputs a reconstruction of the client's private training data. It's intended for privacy researchers, security auditors, or AI system designers working with distributed machine learning to evaluate potential data breaches.
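To illustrate the threat model, here is a minimal NumPy sketch of the setting: a client sends intermediate "smashed data" to the server, and a malicious server fits a decoder on auxiliary public data to invert those features. This is a toy with a linear client model and least-squares decoder, purely hypothetical and not the paper's FORA method (which targets neural feature extractors); all names and dimensions are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical client-side "bottom model": a single linear layer.
# (Assumption for the sketch; real split learning uses a neural network.)
d, k = 8, 8
W = rng.normal(size=(k, d))

def client_forward(x):
    # "Smashed data": the intermediate features the client sends to the server.
    return x @ W.T

# Auxiliary public data available to the server (assumed to share the
# private data's distribution).
X_aux = rng.normal(size=(200, d))
Z_aux = client_forward(X_aux)

# Attack sketch: fit a linear decoder mapping smashed data back to inputs.
D, *_ = np.linalg.lstsq(Z_aux, X_aux, rcond=None)

# A private client batch the server never observes directly.
X_priv = rng.normal(size=(16, d))
Z_priv = client_forward(X_priv)   # only this reaches the server
X_rec = Z_priv @ D                # server's reconstruction attempt

err = np.linalg.norm(X_rec - X_priv) / np.linalg.norm(X_priv)
print(f"relative reconstruction error: {err:.2e}")
```

In this idealized linear case the decoder recovers the private batch almost exactly, which is the intuition behind why smashed data can leak training inputs; the real attack has to cope with nonlinear, lossy feature maps.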

No commits in the last 6 months.

Use this if you need to test how easily a malicious server can reconstruct sensitive client data in a split learning setup, even with limited prior knowledge.

Not ideal if you are looking for a defense mechanism against data reconstruction attacks, as this tool focuses on demonstrating the attack itself.

privacy-research machine-learning-security distributed-learning data-privacy vulnerability-testing
No License · Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 8 / 25
Community 14 / 25


Stars: 15
Forks: 3
Language: Python
License: None
Last pushed: Sep 11, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/X1aoyangXu/FORA"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.