AdityaDutt/Audio-Classification-Using-Wavelet-Transform

Classifying audio using Wavelet transform and deep learning

Score: 30 / 100 (Emerging)

This project helps audio engineers or researchers classify spoken words from different speakers. It takes raw audio recordings as input and uses advanced signal processing (wavelet transform) combined with deep learning to identify who is speaking. The output is a classification of the speaker for each audio segment. Someone working with audio forensics, voice biometrics, or speech recognition research would find this useful.
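To illustrate the feature-extraction idea, here is a minimal sketch of a single-level-per-step Haar wavelet decomposition in plain Python. This is a simplified stand-in, not the repository's actual pipeline (which may use a library such as PyWavelets and a different wavelet); the function names and the toy signal are hypothetical.

```python
# Hypothetical sketch of wavelet-based feature extraction; the repo's
# own implementation may differ (wavelet choice, library, levels).

def haar_step(signal):
    """One level of the Haar wavelet transform: split a signal into
    approximation (low-pass) and detail (high-pass) coefficients."""
    assert len(signal) % 2 == 0, "signal length must be even"
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def wavelet_features(signal, levels=3):
    """Mean absolute detail magnitude per level: a tiny feature vector
    a classifier (e.g. a neural network) could consume."""
    features = []
    current = signal
    for _ in range(levels):
        current, detail = haar_step(current)
        features.append(sum(abs(d) for d in detail) / len(detail))
    return features

# Toy 8-sample "audio" segment
feats = wavelet_features([1.0, 3.0, 2.0, 2.0, 5.0, 1.0, 0.0, 4.0], levels=3)
print(feats)  # → [1.25, 0.25, 0.25]
```

Each level halves the signal length, so coarser levels capture slower variations; the per-level detail energies give a compact, speaker-sensitive summary that a deep-learning model can classify.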

No commits in the last 6 months.

Use this if you need a step-by-step guide and a practical example of distinguishing speakers by voice using wavelet-based audio features.

Not ideal if you need a ready-to-use application for real-time voice identification or a system that can classify a large number of speakers outside of the provided dataset.

audio-classification speaker-recognition speech-analysis voice-biometrics sound-engineering
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 7 / 25
Maturity 8 / 25
Community 15 / 25


Stars: 35
Forks: 6
Language: Python
License: None
Last pushed: Sep 05, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/AdityaDutt/Audio-Classification-Using-Wavelet-Transform"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.