MichaelTJC96/Label_Flipping_Attack

The project aims to evaluate the vulnerability of Federated Learning systems to a targeted data poisoning attack known as a Label Flipping Attack. It studies the scenario in which a malicious participant can only manipulate the raw training data on their own device. Hence, non-expert malicious participants can achieve poisoning without knowing the model type, the model parameters, or the Federated Learning process. In addition, the project analyses the possibility and effectiveness of concealing the attack's tracks while poisoning the raw data of other devices.

30 / 100 (Emerging)

This project helps evaluate the security of Federated Learning systems against a specific type of data poisoning attack called a Label Flipping Attack. It takes raw training data, simulates an attack in which a malicious participant alters data labels, and then shows how well the system withstands the attack or is compromised by it. It is designed for security researchers, data scientists, and machine learning engineers working with distributed learning models.
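The core idea of the attack described above is simple: a malicious client relabels some of its local training examples from a source class to a target class before they enter the federated training round. The sketch below is a minimal illustration of that step, not code from the repository; the function name `flip_labels` and its parameters are hypothetical.

```python
import numpy as np

def flip_labels(labels, source_class, target_class, flip_rate=1.0, seed=0):
    """Simulate a label flipping attack on one client's local data:
    relabel a fraction (flip_rate) of the source-class examples as the
    target class. Illustrative helper only, not the repository's API."""
    rng = np.random.default_rng(seed)
    poisoned = labels.copy()
    # Indices of examples the attacker wants to mislabel.
    idx = np.flatnonzero(poisoned == source_class)
    n_flip = int(len(idx) * flip_rate)
    chosen = rng.choice(idx, size=n_flip, replace=False)
    poisoned[chosen] = target_class
    return poisoned

# Example: a malicious client flips every "1" label to "7" (flip_rate=1.0).
labels = np.array([1, 7, 1, 0, 1, 7])
poisoned = flip_labels(labels, source_class=1, target_class=7)
```

Because only the raw labels are touched, the attacker needs no knowledge of the model architecture, its parameters, or the aggregation protocol, which is exactly the low-expertise threat model the project evaluates.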

No commits in the last 6 months.

Use this if you need to assess the vulnerability of your Federated Learning models to malicious participants who might covertly manipulate training data labels on their devices.

Not ideal if you are looking for a general-purpose tool to defend against all types of machine learning attacks or if your system does not involve Federated Learning.

federated-learning data-poisoning machine-learning-security distributed-ai model-vulnerability
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 8 / 25
Community 16 / 25


Stars: 22
Forks: 7
Language: Python
License: None
Last pushed: Jan 05, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/MichaelTJC96/Label_Flipping_Attack"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.