aminul-huq/Adversarial-Examples-For-Audio-Data

A curated list of papers on adversarial attack and defense techniques in the audio domain.

Score: 25 / 100 (Experimental)

This collection helps researchers and practitioners understand how audio-based AI systems, such as speech-to-text, speaker verification, and voice assistants, can be fooled. It categorizes papers on methods for crafting adversarial examples that cause a model to misinterpret spoken input, and on techniques for defending against such attacks. It is aimed at anyone who builds or deploys speech AI and needs to assess and improve the security and robustness of their systems.
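To make the idea concrete, here is a minimal, generic illustration of an FGSM-style audio perturbation: a waveform is nudged by a small, bounded amount in the direction that worsens a model's score. This is not code from any paper in the repo; the "model" here is a toy linear classifier over raw samples so the gradient is analytic, and all names are illustrative.

```python
import numpy as np

def fgsm_perturb(waveform, weights, label_sign, epsilon=0.002):
    """Shift each sample by +/- epsilon in the direction that increases
    the loss of a toy linear score w . x (label_sign = +1 for the true class)."""
    # For loss L = -label_sign * (w . x), the gradient w.r.t. x is
    # -label_sign * w; FGSM perturbs by epsilon times the gradient's sign.
    grad = -label_sign * weights
    adv = waveform + epsilon * np.sign(grad)
    # Keep samples inside the valid audio range [-1, 1].
    return np.clip(adv, -1.0, 1.0)

rng = np.random.default_rng(0)
x = rng.uniform(-0.5, 0.5, size=16000)   # one second of 16 kHz "audio"
w = rng.normal(size=16000)               # toy model weights
x_adv = fgsm_perturb(x, w, label_sign=1)

print(np.max(np.abs(x_adv - x)) <= 0.002 + 1e-12)  # perturbation is bounded
print(w @ x_adv < w @ x)                           # true-class score dropped
```

The perturbation is inaudible by construction (every sample moves at most `epsilon`), yet it systematically lowers the model's score for the correct class; real attacks apply the same principle to deep speech models via backpropagated gradients.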

No commits in the last 6 months.

Use this if you are developing or deploying AI systems that process speech and want to understand how to make them more resilient to malicious audio inputs or evaluate their current vulnerabilities.

Not ideal if you are looking for ready-to-use code or tools to immediately implement attacks or defenses without diving into the underlying research.

speech-recognition speaker-verification voice-assistants AI-security audio-processing
No License · Stale (6m) · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 8 / 25
Community: 10 / 25


Stars: 41
Forks: 4
Language: (none listed)
License: none
Last pushed: Dec 06, 2020
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/voice-ai/aminul-huq/Adversarial-Examples-For-Audio-Data"

Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
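The same request can be made from Python. Only the URL pattern (/api/v1/quality/&lt;category&gt;/&lt;owner&gt;/&lt;repo&gt;) comes from the page above; the function names are mine, and the shape of the JSON response is an assumption that may differ from what the API actually returns.

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the API URL for one repository's quality card."""
    return f"{BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report (requires network access;
    the response schema is assumed, not documented here)."""
    with urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

url = quality_url("voice-ai", "aminul-huq", "Adversarial-Examples-For-Audio-Data")
print(url)
```

Building the URL with `quote` keeps owner and repo names safe if they ever contain characters that need percent-encoding; `fetch_quality` is shown but not called, since it performs a live request.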