aminul-huq/Speech-Command-Classification

Speech command classification on the Speech Commands v0.02 dataset using PyTorch and torchaudio. In this example, three models have been trained: one on raw signal waveforms, one on MFCC features, and one on MelSpectrogram features.

Score: 28 / 100 · Experimental

This project helps classify short spoken commands, like "yes" or "no," from audio recordings. It takes raw audio signals or common audio features as input and outputs the predicted command, allowing applications to react to specific voice inputs. It's designed for someone building voice-controlled interfaces or systems that need to understand simple spoken instructions.

No commits in the last 6 months.

Use this if you need to identify predefined single-word or short phrase commands from spoken audio for applications like smart home devices, accessibility tools, or interactive voice systems.

Not ideal if you need to transcribe long-form speech, understand complex sentences, or detect arbitrary words not included in a fixed command list.

voice-control audio-analysis human-computer-interaction embedded-systems accessibility-tech
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 8 / 25
Community 15 / 25

How are scores calculated?

Stars: 9
Forks: 5
Language: Python
License: none
Last pushed: Dec 05, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/voice-ai/aminul-huq/Speech-Command-Classification"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
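The endpoint above appears to follow a predictable path scheme (`/api/v1/quality/<category>/<owner>/<repo>`, inferred from the single example shown). A minimal Python sketch for calling it with the standard library; the JSON response schema is an assumption:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a repository
    (path scheme inferred from the documented example)."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch the quality report and parse it as JSON
    (the response being JSON is assumed, not documented here)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

print(quality_url("voice-ai", "aminul-huq", "Speech-Command-Classification"))
```

With a free key (1,000 requests/day), authentication would presumably go in a header or query parameter; the scheme is not shown here, so this sketch covers only the unauthenticated 100/day tier.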