Xinghui-Wu/KENKU

KENKU: Towards Efficient and Stealthy Black-box Adversarial Attacks against ASR Systems

Score: 27 / 100 (Experimental)

This project helps security researchers and penetration testers evaluate the robustness of Automatic Speech Recognition (ASR) systems. Given an audio clip (such as a song) and a target command text, it generates a perturbed audio file that sounds normal to a human listener but causes an ASR system to transcribe the hidden command, revealing a practical vulnerability.
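The core idea is that the perturbation must stay small enough to be imperceptible. The sketch below illustrates that constraint with a per-sample budget; it is a generic illustration only, not KENKU's actual optimization (the paper describes the real method), and all names in it are hypothetical.

```python
# Generic sketch of the "imperceptible perturbation" idea behind such
# attacks. NOT KENKU's algorithm; it only shows how a per-sample budget
# `eps` keeps the altered audio close to the original carrier clip.

def clip(x: float, lo: float, hi: float) -> float:
    """Constrain x to the interval [lo, hi]."""
    return max(lo, min(hi, x))

def embed_perturbation(host: list[float], delta: list[float], eps: float) -> list[float]:
    """Add a perturbation to a host waveform, limiting each sample's
    change to +/- eps and keeping the result in the valid range [-1, 1]."""
    return [clip(h + clip(d, -eps, eps), -1.0, 1.0) for h, d in zip(host, delta)]

# In a real attack, `delta` would be optimized so the ASR model
# transcribes the target command; here it is just illustrative noise.
host = [0.1, -0.3, 0.8, 0.0]    # samples of the carrier clip
delta = [0.5, -0.5, 0.5, -0.5]  # oversized candidate perturbation
adversarial = embed_perturbation(host, delta, eps=0.02)
```

Bounding each sample's change (rather than the total energy) is one common imperceptibility proxy; real attacks typically also exploit psychoacoustic masking.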

No commits in the last 6 months.

Use this if you are a security researcher or red team member looking to test the resilience of commercial ASR systems against 'hidden voice command' or 'integrated command' attacks.

Not ideal if you are looking for a general-purpose ASR system or a tool to improve speech recognition accuracy.

Tags: ASR security, adversarial audio, penetration testing, voice command vulnerability, speech recognition robustness
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 16 / 25
Community 5 / 25
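The headline figure appears to be the plain sum of the four sub-scores, each out of 25; treating it as a sum is an inference from the arithmetic above, not documented scoring behavior:

```python
# Sub-scores as listed above; summing them is an inference from the
# arithmetic, not documented behavior of the scoring service.
subscores = {"Maintenance": 0, "Adoption": 6, "Maturity": 16, "Community": 5}
total = sum(subscores.values())  # each component is out of 25
print(total)                     # 27, matching the 27/100 headline score
```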


Stars: 20
Forks: 1
Language: Python
License: MIT
Last pushed: Oct 03, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/voice-ai/Xinghui-Wu/KENKU"

Open to everyone: 100 requests/day with no key needed, or get a free key for 1,000/day.
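The same endpoint can be called from Python with the standard library. Only the URL pattern comes from the curl command above; the helper names and the shape of the JSON response are assumptions.

```python
# Minimal stdlib client for the quality endpoint shown above. Only the
# URL pattern is taken from the curl example; the helper names and the
# response structure are assumptions.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repo quality endpoint URL."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report (100 requests/day keyless)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

url = quality_url("voice-ai", "Xinghui-Wu", "KENKU")
```

Calling `fetch_quality("voice-ai", "Xinghui-Wu", "KENKU")` would retrieve the same data as the curl command, subject to the daily rate limit.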