Dalia-Sher/Speech-Emotion-Recognition-using-BLSTM-with-Attention

We present a study of a neural-network-based method for speech emotion recognition using audio-only features. In the studied scheme, acoustic features are extracted from the audio utterances and fed to a network consisting of CNN layers, a BLSTM layer combined with an attention mechanism, and a fully connected layer. To illustrate and analyze the classification capabilities of the network, we used the t-SNE method. We evaluated our model on the RAVDESS and IEMOCAP databases.
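The attention step described above is commonly implemented as a learned weighting over the BLSTM's per-frame outputs, producing a single context vector for the fully connected classifier. The sketch below shows one widely used formulation in NumPy; the dimensions and scoring function are assumptions for illustration, and the repository's exact variant may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

T, D = 50, 128                         # time steps, BLSTM output size (hypothetical)
H = rng.normal(size=(T, D))            # stand-in for BLSTM outputs, one row per frame
w = rng.normal(size=(D,)) / np.sqrt(D) # attention vector (learned in a real model)

scores = np.tanh(H) @ w                # per-frame attention scores, shape (T,)
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()                   # softmax over time: weights sum to 1
context = alpha @ H                    # weighted sum of frames, shape (D,)
```

The context vector `context` then replaces simple mean- or last-frame pooling as the input to the final classification layer.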

Score: 20 / 100 (Experimental)

This project helps researchers and engineers analyze and classify emotions expressed in spoken audio. It takes raw audio recordings as input and outputs classifications of the emotions present, such as happiness, sadness, or anger. This is useful for anyone studying human affect, designing empathetic AI, or analyzing communication patterns.

No commits in the last 6 months.

Use this if you need to automatically identify and categorize emotions from spoken language in audio files.

Not ideal if you need to analyze emotions from text, images, or video, as this project focuses exclusively on audio-only features.

emotion-recognition audio-analysis affective-computing speech-analysis human-computer-interaction
No License · Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 8 / 25
Community 7 / 25


Stars: 11
Forks: 1
Language: Python
License: None
Last pushed: Jul 24, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/voice-ai/Dalia-Sher/Speech-Emotion-Recognition-using-BLSTM-with-Attention"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
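If you prefer consuming the endpoint from code, the snippet below parses a response of the kind this page implies. The field names (`score`, `tier`, `breakdown`, etc.) are assumptions reconstructed from the figures shown above, not a documented schema; check the actual API response before relying on them.

```python
import json

# Hypothetical payload mirroring the figures on this page;
# the real API's field names and structure may differ.
sample = json.loads("""
{
  "repo": "Dalia-Sher/Speech-Emotion-Recognition-using-BLSTM-with-Attention",
  "score": 20,
  "tier": "Experimental",
  "breakdown": {"maintenance": 0, "adoption": 5, "maturity": 8, "community": 7}
}
""")

# The four sub-scores (out of 25 each) sum to the headline score out of 100.
total = sum(sample["breakdown"].values())
print(f'{sample["repo"]}: {sample["score"]}/100 ({sample["tier"]})')
```

In a real client you would replace the hardcoded `sample` with the body returned by the `curl` URL above.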