dAI-SY-Group/PRECODE

Source code and demonstration for our paper "PRECODE - A Generic Model Extension to Prevent Deep Gradient Leakage".

Quality score: 20 / 100 (Experimental)

This project helps machine learning engineers and data scientists protect sensitive information when training neural networks collaboratively on distributed datasets. It extends a standard neural network model so that private training data cannot be reconstructed from shared gradients by malicious actors, without sacrificing model performance. The output is a privacy-enhanced model ready for deployment.
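According to the paper, the extension PRECODE adds is a variational bottleneck placed inside the network. The sketch below shows only the generic reparameterization step behind such bottlenecks, in pure Python; the function name and shapes are illustrative and are not taken from the repository's code.

```python
import math
import random

def reparameterize(mu, log_var, rng=random):
    """Sample z = mu + sigma * eps with eps ~ N(0, 1) (reparameterization trick).

    Replacing deterministic activations with such stochastic samples is the
    core idea behind variational-bottleneck defenses: the exchanged gradients
    no longer deterministically encode the training input.
    """
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

# With a very small log-variance the sample collapses to the mean,
# which makes the behavior easy to sanity-check:
z = reparameterize([1.0, 2.0], [-100.0, -100.0])
print([round(v, 4) for v in z])  # → [1.0, 2.0]
```

In the actual defense, `mu` and `log_var` would be produced by learned layers between the feature extractor and the classifier, and a KL term would regularize the latent distribution during training.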

No commits in the last 6 months.

Use this if you are a machine learning engineer or data scientist concerned about data privacy during collaborative model training, particularly when exchanging gradient information across different data owners or clients.

Not ideal if your primary concern is reducing model training time or if you are not dealing with scenarios where gradient leakage is a privacy risk.

privacy-preserving-AI federated-learning deep-learning-security data-protection gradient-privacy
No License · Stale 6m · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 8 / 25
Community 7 / 25


Stars: 10
Forks: 1
Language: Jupyter Notebook
License: none
Last pushed: Feb 15, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/dAI-SY-Group/PRECODE"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
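For programmatic access, the same endpoint can be wrapped in a few lines of Python. The URL pattern is taken from the curl example above; the helper names are illustrative, and the shape of the JSON response is not documented here, so it is returned as a plain dict.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-report URL for a repository."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report (raises on HTTP errors)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

print(quality_url("ml-frameworks", "dAI-SY-Group", "PRECODE"))
# → https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/dAI-SY-Group/PRECODE
```

Without an API key this is subject to the 100 requests/day limit mentioned above, so cache responses rather than polling.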