X1aoyangXu/FORA
Official code of the paper "A Stealthy Wrongdoer: Feature-Oriented Reconstruction Attack against Split Learning".
This project offers a way to assess the privacy vulnerabilities of split learning systems. It takes as input the 'smashed data' from a client and auxiliary public data on the server side, then outputs a reconstruction of the client's private training data. It's intended for privacy researchers, security auditors, or AI system designers working with distributed machine learning to evaluate potential data breaches.
No commits in the last 6 months.
Use this if you need to test how easily a malicious server can reconstruct sensitive client data in a split learning setup, even with limited prior knowledge.
Not ideal if you are looking for a defense mechanism against data reconstruction attacks, as this tool focuses on demonstrating the attack itself.
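The split-learning data flow this attack targets can be sketched as follows. This is a minimal illustrative toy, not FORA's implementation; all names (`client_head`, `server_tail`) and the tiny linear/ReLU layers are assumptions made for clarity:

```python
# Toy sketch of split learning: the client runs the first layers and sends
# only the intermediate activations ("smashed data") to the server.
import random

def client_head(x, w):
    # Client-side layers: a linear map followed by ReLU produces the
    # "smashed data" (intermediate activations) that leaves the client.
    return [max(0.0, sum(xi * wi for xi, wi in zip(x, col))) for col in w]

def server_tail(smashed, v):
    # Server-side layers: finish the forward pass from the smashed data.
    return sum(si * vi for si, vi in zip(smashed, v))

random.seed(0)
x = [random.random() for _ in range(4)]                      # private client input
w = [[random.random() for _ in range(4)] for _ in range(3)]  # client-side weights
v = [random.random() for _ in range(3)]                      # server-side weights

smashed = client_head(x, w)  # only this crosses the client/server boundary
y = server_tail(smashed, v)  # a malicious server observes `smashed`, never x
```

A feature-oriented reconstruction attack like FORA tries to invert this boundary: from `smashed` (plus auxiliary public data), recover an approximation of the private input `x`.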
Stars: 15
Forks: 3
Language: Python
License: —
Category:
Last pushed: Sep 11, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/X1aoyangXu/FORA"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
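A Python equivalent of the curl call above, using only the standard library. The endpoint path and rate limits are as stated by this listing; the shape of the JSON response is not documented here, so the sketch simply returns the parsed body, and the network call is left commented out:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    # Build the per-repository quality endpoint shown in the curl example.
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    # Anonymous access is limited to 100 requests/day per the listing;
    # a free key raises this to 1,000/day.
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

url = quality_url("ml-frameworks", "X1aoyangXu", "FORA")
# data = fetch_quality("ml-frameworks", "X1aoyangXu", "FORA")  # live request
print(url)
```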
Higher-rated alternatives
google/scaaml
SCAAML: Side Channel Attacks Assisted with Machine Learning
pralab/secml
A Python library for Secure and Explainable Machine Learning
Koukyosyumei/AIJack
Security and Privacy Risk Simulator for Machine Learning (arXiv:2312.17667)
AI-SDC/SACRO-ML
Collection of tools and resources for managing the statistical disclosure control of trained...
liuyugeng/ML-Doctor
Code for ML Doctor