AndreaLombax/Speech_emotion_recognition

This work proposes a speech emotion recognition model: four different features are extracted from RAVDESS sound files, each resulting matrix is mean-pooled along the time axis, and the pooled values are stacked into a one-dimensional array. This array is then fed to a 1-D CNN model as input.
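The description above does not name the four features (common choices for RAVDESS work are MFCCs, chroma, mel spectrogram, and spectral contrast, but that is an assumption here). Whatever the features are, the pooling-and-stacking step can be sketched with NumPy: each feature matrix of shape (n_coefficients, n_frames) is averaged over the time axis, and the resulting vectors are concatenated into one fixed-length input for the 1-D CNN. The matrix shapes below are hypothetical stand-ins, not the repository's actual values.

```python
import numpy as np

def pool_and_stack(feature_mats):
    """Mean-pool each (n_coeffs, n_frames) feature matrix along the
    time axis (axis=1) and concatenate the results into one 1-D vector."""
    return np.concatenate([m.mean(axis=1) for m in feature_mats])

# Hypothetical per-feature shapes, e.g. MFCC (40, T), chroma (12, T),
# mel spectrogram (128, T), spectral contrast (7, T) for T frames.
rng = np.random.default_rng(0)
T = 100
mats = [rng.standard_normal((n, T)) for n in (40, 12, 128, 7)]

vec = pool_and_stack(mats)
print(vec.shape)  # (187,) — one fixed-length vector per audio file
```

Because the time axis is averaged away, clips of different durations all map to the same input length, which is what lets a single 1-D CNN consume them.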

Score: 28 / 100 (Experimental)

This tool helps you automatically identify the emotional state conveyed in spoken audio. You input sound files containing human speech, and it outputs a classification of the emotion expressed, such as sadness, happiness, or anger. It's designed for researchers or practitioners who need to analyze emotional content from audio recordings.

No commits in the last 6 months.

Use this if you need to classify emotions from spoken English audio, particularly from recordings similar to the RAVDESS dataset, for research or analytical purposes.

Not ideal if you need real-time emotion detection, robust performance with highly noisy audio, or analysis of non-English speech.

emotion-analysis speech-processing audio-analytics human-computer-interaction psychology-research
Stale (6 months) · No package published · No dependents

Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 16 / 25
Community: 7 / 25


Stars: 10
Forks: 1
Language: Python
License: GPL-3.0
Last pushed: Feb 27, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/voice-ai/AndreaLombax/Speech_emotion_recognition"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.