ege-erdogan/splitguard
Supplementary code for the paper "SplitGuard: Detecting and Mitigating Training-Hijacking Attacks in Split Learning"
This project helps organizations using split learning to protect sensitive input data from malicious servers. It provides tools to detect if a server is attempting to manipulate the client model to expose private information. The end-user would be a data privacy officer or a machine learning operations engineer responsible for securing distributed deep learning systems.
No commits in the last 6 months.
Use this if you are a client in a split learning setup and need to ensure your private data inputs are not being compromised by a rogue server.
Not ideal if you are looking for general privacy-preserving machine learning techniques outside of the split learning paradigm.
Stars: 12
Forks: —
Language: Jupyter Notebook
License: —
Category: —
Last pushed: Jan 15, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/ege-erdogan/splitguard"
Open to everyone: 100 requests/day with no key; a free key raises the limit to 1,000/day.
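As a minimal sketch, the same endpoint could be queried from Python using only the standard library. The URL layout (`/api/v1/quality/<category>/<owner>/<repo>`) is taken from the curl example above; the assumption that the endpoint returns JSON is not documented here and may need checking.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL, following the path layout of the curl example above."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch the quality record for a repository.

    Assumes the endpoint responds with a JSON body (not confirmed by the
    documentation above). Unauthenticated calls are limited to 100/day.
    """
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

# Build the URL for this repository; the network call itself is left to the caller:
url = quality_url("ml-frameworks", "ege-erdogan", "splitguard")
print(url)
```

Keeping URL construction separate from the request makes the path easy to verify against the curl example before spending a rate-limited call.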
Higher-rated alternatives
google/scaaml
SCAAML: Side Channel Attacks Assisted with Machine Learning
pralab/secml
A Python library for Secure and Explainable Machine Learning
Koukyosyumei/AIJack
Security and Privacy Risk Simulator for Machine Learning (arXiv:2312.17667)
AI-SDC/SACRO-ML
Collection of tools and resources for managing the statistical disclosure control of trained...
oss-slu/mithridatium
Mithridatium is a research-driven project aimed at detecting backdoors and data poisoning in...