zlijingtao/ResSFL
Official repository for ResSFL (accepted at CVPR '22)
ResSFL helps machine learning engineers and researchers who build systems with split federated learning protect user data. It takes models trained with split federated learning and hardens them against model inversion attacks, which can expose sensitive information from the training data. The output is a more secure, privacy-preserving model, intended for professionals who deploy machine learning systems where data privacy is critical.
No commits in the last 6 months.
Use this if you are developing or deploying split federated learning models and need to specifically defend against model inversion attacks to protect user privacy.
Not ideal if your primary concern is other types of federated learning attacks, or if you are not working with split federated learning.
- Stars: 26
- Forks: 4
- Language: Shell
- License: MIT
- Category:
- Last pushed: Jun 24, 2022
- Commits (30d): 0
Get this data via API:

```shell
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/zlijingtao/ResSFL"
```
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
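For programmatic use, the curl call above can be wrapped in a small script. The sketch below builds the same URL and fetches the JSON record; the `quality_url` and `fetch_quality` helpers are hypothetical names, and the response schema is an assumption — only the endpoint path is taken from this page.

```python
import json
import urllib.request

# Base path of the pt-edge quality API, as shown in the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a repository (hypothetical helper)."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality record (requires network access;
    the shape of the returned JSON is an assumption, not documented here)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Reproduces the URL used in the curl example above.
    print(quality_url("ml-frameworks", "zlijingtao", "ResSFL"))
```

Unauthenticated calls are rate-limited to 100 requests/day, so a script polling many repositories should either batch its lookups or use a free API key.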
Higher-rated alternatives:
- flwrlabs/flower: Flower: A Friendly Federated AI Framework
- JonasGeiping/breaching: Breaching privacy in federated learning scenarios for vision and text
- zama-ai/concrete-ml: Concrete ML: Privacy Preserving ML framework using Fully Homomorphic Encryption (FHE), built on...
- anupamkliv/FedERA: FedERA is a modular and fully customizable open-source FL framework, aiming to address these...
- p2pfl/p2pfl: P2PFL is a decentralized federated learning library that enables federated learning on...