zlijingtao/ResSFL

Official Repository for ResSFL (accepted by CVPR '22)

Score: 36 / 100 (Emerging)

This project helps machine learning engineers and researchers who build systems with split federated learning protect user data. It takes models trained with split federated learning and enhances them to resist model inversion attacks, which can reconstruct sensitive information from the training data. The output is a more secure, privacy-preserving model, aimed at professionals deploying machine learning systems where data privacy is critical.

No commits in the last 6 months.

Use this if you are developing or deploying split federated learning models and need to specifically defend against model inversion attacks to protect user privacy.

Not ideal if your primary concern is other types of federated learning attacks, or if you are not working with split federated learning.

federated-learning data-privacy model-security machine-learning-engineering AI-security
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 7 / 25
Maturity 16 / 25
Community 13 / 25


Stars: 26
Forks: 4
Language: Shell
License: MIT
Last pushed: Jun 24, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/zlijingtao/ResSFL"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
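The endpoint above follows a collection/owner/repo URL pattern. A minimal shell sketch that builds the same URL for an arbitrary repository (the variable names are illustrative, and the API's response format is not documented here, so parsing is left out):

```shell
# Build the quality endpoint URL for a repository in a collection.
# Pattern inferred from the example request above.
BASE="https://pt-edge.onrender.com/api/v1/quality"
COLLECTION="ml-frameworks"
OWNER="zlijingtao"
REPO="ResSFL"
URL="$BASE/$COLLECTION/$OWNER/$REPO"

# Fetch with curl (uncomment to actually call the API):
# curl "$URL"
echo "$URL"
```

Swapping `OWNER` and `REPO` lets you query other repositories in the same collection, subject to the rate limits described above.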