awesome-data-poisoning-and-backdoor-attacks and Awesome-Backdoor-in-Deep-Learning

These two curated resource lists cover overlapping but distinct scopes: the first spans both data poisoning and backdoor attacks, while the second focuses specifically on backdoor attacks in deep learning. Consulted together, they give comprehensive coverage of the data-poisoning and backdoor attack and defense literature.

awesome-data-poisoning-and-backdoor-attacks
Scores: Maintenance 0/25 · Adoption 10/25 · Maturity 16/25 · Community 15/25
Stats: Stars 287 · Forks 26 · Commits (30d) 0 · License MIT
Status: Archived · Stale 6m · No Package · No Dependents

Awesome-Backdoor-in-Deep-Learning
Scores: Maintenance 0/25 · Adoption 10/25 · Maturity 16/25 · Community 11/25
Stats: Stars 237 · Forks 13 · Commits (30d) 0 · Language Python · License GPL-3.0
Status: Stale 6m · No Package · No Dependents

About awesome-data-poisoning-and-backdoor-attacks

penghui-yang/awesome-data-poisoning-and-backdoor-attacks

A curated list of papers & resources linked to data poisoning, backdoor attacks and defenses against them (no longer maintained)

About Awesome-Backdoor-in-Deep-Learning

zihao-ai/Awesome-Backdoor-in-Deep-Learning

A curated list of papers & resources on backdoor attacks and defenses in deep learning.

This resource helps machine learning engineers and researchers understand and mitigate security risks in deep learning models. It provides a comprehensive collection of papers and resources on 'backdoor attacks'—malicious hidden functions in models—and 'backdoor defenses' to protect against them. You can use this to research various attack methods on different model types and find corresponding defense strategies.
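To make the threat model concrete: a classic data-poisoning backdoor (the BadNets-style pattern that much of the literature in both lists builds on) stamps a small trigger onto a fraction of the training images and relabels them to an attacker-chosen class, so the trained model misclassifies any triggered input. A minimal illustrative sketch follows; the function name, trigger shape, and parameters are hypothetical, not taken from either repository.

```python
import numpy as np

def poison_batch(images, labels, target_label=0, poison_frac=0.1, seed=0):
    """Stamp a small white trigger patch on a random fraction of images
    and relabel them to the attacker's target class (BadNets-style).
    All names and defaults here are illustrative."""
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * poison_frac)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # A 3x3 white patch in the bottom-right corner acts as the trigger.
    images[idx, -3:, -3:] = 1.0
    # Poisoned samples are relabeled so the model associates the
    # trigger with the attacker's target class.
    labels[idx] = target_label
    return images, labels, idx

# Usage: 100 grayscale 28x28 images, all originally labeled class 1.
imgs = np.zeros((100, 28, 28))
labs = np.ones(100, dtype=int)
p_imgs, p_labs, idx = poison_batch(imgs, labs)
```

A defense surveyed in these lists would then try to detect the poisoned subset (e.g. by activation clustering) or to remove the learned trigger response from the model.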

Tags: AI security, deep learning security, model robustness, adversarial machine learning, federated learning security

Scores updated daily from GitHub, PyPI, and npm data.