rafaelvalle/asrgen

Attacking Speaker Recognition with Deep Generative Models

Quality score: 30 / 100 (Emerging)

This project helps security researchers and voice biometric developers explore vulnerabilities in speaker recognition systems. It takes existing audio data and uses deep generative models to create synthetic audio samples that can deceive these systems. The output is 'fake' audio that sounds like a target speaker, useful for evaluating the robustness of voice authentication.

No commits in the last 6 months.

Use this if you need to generate adversarial audio samples to test the security and reliability of speaker recognition or voice biometric systems.

Not ideal if you want to build a new speaker recognition system, or if you need general-purpose audio synthesis unrelated to security testing.

voice-biometrics speaker-recognition security-research adversarial-audio voice-authentication
Badges: No License · Stale (6 months) · No Package · No Dependents

Score breakdown:
- Maintenance: 0 / 25
- Adoption: 7 / 25
- Maturity: 8 / 25
- Community: 15 / 25


Stars: 34
Forks: 6
Language: Jupyter Notebook
License: none
Last pushed: Mar 24, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/voice-ai/rafaelvalle/asrgen"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
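The curl example above suggests the endpoint path follows the pattern `/api/v1/quality/<category>/<owner>/<repo>`. A minimal Python sketch that builds such a URL for any listing; note that only the `voice-ai/rafaelvalle/asrgen` path is confirmed by this page, and the general pattern is an assumption:

```python
# Sketch: construct the quality-API URL from the curl example above.
# Assumed path pattern: /api/v1/quality/<category>/<owner>/<repo>.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_api_url(category: str, owner: str, repo: str) -> str:
    """Return the API URL for a repository's quality data."""
    return f"{BASE}/{category}/{owner}/{repo}"

# Reproduces the documented endpoint for this project:
print(quality_api_url("voice-ai", "rafaelvalle", "asrgen"))
# → https://pt-edge.onrender.com/api/v1/quality/voice-ai/rafaelvalle/asrgen
```

The anonymous tier allows 100 requests per day, so a client fetching many listings should batch or cache results, or register for a free key for the 1,000/day limit.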