williamdevena/Defending-federated-learning-system
Implementation of client reputation, gradient checking, and homomorphic encryption mechanisms to defend a federated learning system against data/model poisoning and reverse-engineering attacks.
This project helps machine learning engineers and MLOps specialists secure their federated learning systems. It takes a federated learning setup, identifies potential model or data poisoning, and applies defenses like gradient checking and client reputation. The output is a more robust and secure federated learning environment, protected from common attacks.
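To illustrate the homomorphic-encryption idea mentioned above, here is a minimal, hedged sketch of additive homomorphic encryption in the Paillier style. This is not the repository's implementation; it uses deliberately tiny, insecure parameters (the primes, the fixed randomness `r`, and the function names are all illustrative) just to show why a server can sum encrypted client updates without being able to read them.

```python
from math import gcd

# Toy Paillier cryptosystem -- illustrative only (tiny, insecure parameters).
# NOT the repository's code: it sketches why additive homomorphic encryption
# lets a server aggregate client updates it cannot decrypt individually.

def lcm(a, b):
    return a * b // gcd(a, b)

def keygen(p=293, q=433):
    # p, q are hypothetical toy primes; real deployments use >=2048-bit moduli.
    n = p * q
    lam = lcm(p - 1, q - 1)
    # With generator g = n + 1, decryption simplifies: mu = lam^-1 mod n.
    mu = pow(lam, -1, n)
    return (n, n + 1), (lam, mu)          # (public key, private key)

def encrypt(pub, m, r=7):
    n, g = pub
    # Fixed r for reproducibility; real Paillier draws random r coprime to n.
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    x = pow(c, lam, n * n)
    return ((x - 1) // n) * mu % n

pub, priv = keygen()
# Two clients encrypt their (integer-quantized) gradient values.
c1 = encrypt(pub, 15, r=7)
c2 = encrypt(pub, 27, r=11)
# The server multiplies ciphertexts, which adds the plaintexts underneath.
aggregate = (c1 * c2) % (pub[0] ** 2)
print(decrypt(pub, priv, aggregate))  # 42 = 15 + 27
```

In a federated round, each client would encrypt its quantized update this way, the server would combine ciphertexts, and only the key holder could recover the summed update, never an individual client's gradient.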
No commits in the last 6 months.
Use this if you are deploying federated learning models and need to protect them from malicious data or model poisoning, as well as reverse engineering attempts.
Not ideal if you are working with traditional, centralized machine learning models or looking for general cybersecurity solutions outside of federated learning.
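The gradient-checking and client-reputation defenses described above can be sketched as server-side aggregation logic. The following is a hypothetical illustration, not the repository's code: the threshold, the reputation update rule, and all names are assumptions chosen to show the mechanism (reject anomalously large updates, penalize the reputation of clients that submit them, and weight the average by reputation).

```python
import numpy as np

# Hypothetical sketch (not the repository's implementation): server-side
# aggregation combining gradient checking (reject updates with anomalous
# L2 norms) with a client reputation score that decays on rejection.

NORM_THRESHOLD = 5.0   # assumed clipping bound for honest gradients
reputation = {}        # client_id -> score in (0, 1]

def check_gradient(update):
    # Gradient checking: flag updates whose L2 norm lies far outside the
    # range produced by honest clients (a crude poisoning signal).
    return np.linalg.norm(update) <= NORM_THRESHOLD

def aggregate(updates):
    """Reputation-weighted average of the updates that pass the check."""
    accepted, weights = [], []
    for cid, update in updates.items():
        rep = reputation.setdefault(cid, 1.0)
        if check_gradient(update):
            accepted.append(update)
            weights.append(rep)
            reputation[cid] = min(1.0, rep + 0.05)   # slow recovery
        else:
            reputation[cid] = rep * 0.5              # halve on rejection
    if not accepted:
        return None
    w = np.array(weights) / sum(weights)
    return np.average(accepted, axis=0, weights=w)

# One round: two honest clients and one client sending a boosted update.
round_updates = {
    "c1": np.array([0.1, -0.2, 0.3]),
    "c2": np.array([0.2, -0.1, 0.2]),
    "mal": np.array([50.0, 50.0, 50.0]),   # norm far above the threshold
}
agg = aggregate(round_updates)
print(agg, reputation["mal"])  # malicious update excluded, reputation halved
```

Repeated rejections drive a malicious client's weight toward zero, so even updates that slip past the norm check contribute less and less to the global model.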
Stars: 17
Forks: 4
Language: Python
License: —
Category: —
Last pushed: Jan 11, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/williamdevena/Defending-federated-learning-system"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
tensorflow/privacy
Library for training machine learning models with privacy for training data
meta-pytorch/opacus
Training PyTorch models with differential privacy
tf-encrypted/tf-encrypted
A Framework for Encrypted Machine Learning in TensorFlow
awslabs/fast-differential-privacy
Fast, memory-efficient, scalable optimization of deep learning with differential privacy
privacytrustlab/ml_privacy_meter
Privacy Meter: An open-source library to audit data privacy in statistical and machine learning...