dAI-SY-Group/PRECODE
Source code and demonstration for our paper "PRECODE - A Generic Model Extension to Prevent Deep Gradient Leakage".
This project helps machine learning engineers and data scientists protect sensitive information when training neural networks collaboratively, especially with distributed datasets. It takes a standard neural network model and extends it to prevent private training data from being reconstructed by malicious actors, without sacrificing model performance. The output is a robust, privacy-enhanced model ready for deployment.
No commits in the last 6 months.
Use this if you are a machine learning engineer or data scientist concerned about data privacy during collaborative model training, particularly when exchanging gradient information across different data owners or clients.
Not ideal if your primary concern is reducing model training time or if you are not dealing with scenarios where gradient leakage is a privacy risk.
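Per the paper, PRECODE prevents gradient leakage by inserting a variational bottleneck between a model's feature extractor and its output layer: features are mapped to a mean and log-variance, and a stochastic latent code is sampled via the reparameterization trick, so exchanged gradients no longer deterministically encode the input. A minimal NumPy sketch of that idea (layer sizes, weight initialization, and function names are illustrative, not taken from the repo):

```python
import numpy as np

rng = np.random.default_rng(0)

def variational_bottleneck(features, latent_dim, rng):
    """Map features -> (mu, log_var), then sample z = mu + sigma * eps.

    The stochastic sampling is what decouples the gradients shared during
    collaborative training from the exact input activations.
    """
    feat_dim = features.shape[-1]
    # Illustrative random projections standing in for learned weights.
    w_mu = rng.normal(0.0, 0.1, size=(feat_dim, latent_dim))
    w_lv = rng.normal(0.0, 0.1, size=(feat_dim, latent_dim))
    mu = features @ w_mu
    log_var = features @ w_lv
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * log_var) * eps  # reparameterization trick
    return z, mu, log_var

# Toy forward pass: batch of 4 feature vectors, 16-dim, bottlenecked to 8.
features = rng.standard_normal((4, 16))
z, mu, log_var = variational_bottleneck(features, latent_dim=8, rng=rng)
print(z.shape)  # (4, 8)
```

In the full method the sampled `z` feeds the model's remaining layers and a KL term regularizes `mu`/`log_var`; see the repo's notebooks for the actual implementation.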
Stars: 10
Forks: 1
Language: Jupyter Notebook
License: —
Category: —
Last pushed: Feb 15, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/dAI-SY-Group/PRECODE"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
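If you prefer to consume the endpoint programmatically, the response can be parsed as JSON. The payload below is hypothetical, mirroring only the fields shown on this page; the real schema returned by pt-edge.onrender.com may differ, and a live call would use e.g. `urllib.request.urlopen` on the URL above:

```python
import json

# Hypothetical payload with keys assumed from the stats shown on this page.
payload = json.loads("""
{
  "repo": "dAI-SY-Group/PRECODE",
  "stars": 10,
  "forks": 1,
  "language": "Jupyter Notebook",
  "last_pushed": "2022-02-15",
  "commits_30d": 0
}
""")

def summarize(data):
    """One-line summary from a quality-API record (assumed keys)."""
    return (f"{data['repo']}: stars={data['stars']}, forks={data['forks']}, "
            f"{data['language']}, last pushed {data['last_pushed']}")

print(summarize(payload))
```

Check the actual response once before relying on these field names in scripts.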
Higher-rated alternatives
google/scaaml
SCAAML: Side Channel Attacks Assisted with Machine Learning
pralab/secml
A Python library for Secure and Explainable Machine Learning
Koukyosyumei/AIJack
Security and Privacy Risk Simulator for Machine Learning (arXiv:2312.17667)
AI-SDC/SACRO-ML
Collection of tools and resources for managing the statistical disclosure control of trained...
oss-slu/mithridatium
Mithridatium is a research-driven project aimed at detecting backdoors and data poisoning in...