williamdevena/Defending-federated-learning-system

Implementation of client reputation, gradient checking, and homomorphic encryption mechanisms to defend a federated learning system against data/model poisoning and reverse-engineering attacks.

Score: 29 / 100 · Experimental

This project helps machine learning engineers and MLOps specialists secure their federated learning systems. It takes a federated learning setup, identifies potential model or data poisoning, and applies defenses such as gradient checking and client reputation. The result is a more robust federated learning environment, protected against these common attacks.
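The sketch below illustrates the general idea behind two of these defenses, gradient checking and client reputation. It is a minimal, hypothetical example, not this repository's actual API: the ReputationAggregator class, its parameters, and the thresholds are invented for illustration. It rejects updates whose gradient norm looks anomalous and down-weights clients whose updates are repeatedly rejected.

import numpy as np

class ReputationAggregator:
    """Toy sketch: norm-based gradient checking plus a client
    reputation score that decays each time a client's update is
    rejected. Names and thresholds are illustrative only."""

    def __init__(self, clip_norm=5.0, decay=0.5):
        self.clip_norm = clip_norm          # max allowed gradient norm
        self.decay = decay                  # reputation penalty factor
        self.reputation = {}                # client_id -> score in (0, 1]

    def aggregate(self, updates):
        """updates: dict mapping client_id -> 1-D numpy gradient vector.
        Returns the reputation-weighted average of accepted updates,
        or None if every update was rejected this round."""
        accepted, weights = [], []
        for cid, grad in updates.items():
            rep = self.reputation.setdefault(cid, 1.0)
            if np.linalg.norm(grad) > self.clip_norm:
                # Gradient check failed: penalize reputation, drop update.
                self.reputation[cid] = rep * self.decay
                continue
            accepted.append(grad)
            weights.append(rep)
        if not accepted:
            return None
        # Reputation-weighted average of the surviving updates.
        return np.average(np.stack(accepted), axis=0, weights=weights)

In the full system described above, a homomorphic-encryption layer would additionally protect the updates in transit against reverse engineering; that part is omitted from this sketch.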

No commits in the last 6 months.

Use this if you are deploying federated learning models and need to protect them from malicious data or model poisoning, as well as reverse engineering attempts.

Not ideal if you are working with traditional, centralized machine learning models or looking for general cybersecurity solutions outside of federated learning.

Tags: federated-learning · MLOps · model-security · data-poisoning · AI-security
Badges: No License · Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 8 / 25
Community: 15 / 25
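The overall score is the sum of the four category scores: 0 + 6 + 8 + 15 = 29 out of a possible 100.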


Stars: 17
Forks: 4
Language: Python
License: none
Last pushed: Jan 11, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/williamdevena/Defending-federated-learning-system"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
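For scripted access, the same request can be made from Python. This is a sketch using the requests library; the response schema is not documented in this listing, so the example simply pretty-prints whatever JSON the endpoint returns.

import json
import requests

# Same endpoint as the curl example above.
URL = ("https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/"
       "williamdevena/Defending-federated-learning-system")

resp = requests.get(URL, timeout=10)
resp.raise_for_status()  # surfaces HTTP errors, e.g. rate limiting
print(json.dumps(resp.json(), indent=2))  # schema undocumented; print raw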