habla-liaa/ser-with-w2v2

Official implementation of INTERSPEECH 2021 paper 'Emotion Recognition from Speech Using Wav2vec 2.0 Embeddings'

Quality score: 37 / 100 (Emerging)

This project helps researchers and data scientists analyze emotions expressed in speech. You provide audio recordings, and it processes them to identify specific emotions such as anger, happiness, or sadness. It is aimed at people studying automatic emotion detection from spoken language, particularly within controlled experimental settings.

140 stars. No commits in the last 6 months.

Use this if you need to replicate or build upon state-of-the-art research in speech emotion recognition using specific, pre-trained models on benchmark datasets like RAVDESS or IEMOCAP.

Not ideal if you need a plug-and-play solution for real-world, noisy, or otherwise diverse audio, as the models are trained on clean, acted speech.
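At a high level, the paper's approach pools frame-level wav2vec 2.0 embeddings into a single utterance vector and feeds that to a classifier. The sketch below illustrates only that pooling-plus-classifier idea: the random placeholder embeddings, 768-dim size, label set, and untrained linear head are all illustrative assumptions, not the repository's actual code.

```python
import numpy as np

# Illustrative sketch, not the repo's implementation: the random array
# below stands in for frame-level wav2vec 2.0 features (one 768-dim
# vector per audio frame), which a real pipeline would extract first.

EMOTIONS = ["anger", "happiness", "sadness", "neutral"]  # assumed label set

rng = np.random.default_rng(0)
frame_embeddings = rng.standard_normal((120, 768))  # (frames, embedding_dim)

# Mean-pool over time to get one fixed-size utterance representation.
utterance_vec = frame_embeddings.mean(axis=0)  # shape: (768,)

# A linear classifier head; in practice these weights would be trained.
W = rng.standard_normal((768, len(EMOTIONS))) * 0.01
b = np.zeros(len(EMOTIONS))

logits = utterance_vec @ W + b

# Softmax to turn logits into emotion probabilities.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

predicted = EMOTIONS[int(np.argmax(probs))]
```

Mean pooling is only one option; attention-based or statistical pooling over the frame embeddings is a common variant in SER systems.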

speech-emotion-recognition audio-analysis human-computer-interaction-research affective-computing
Badges: No License · Stale (6m) · No Package · No Dependents

Score breakdown:
Maintenance 0 / 25
Adoption 10 / 25
Maturity 8 / 25
Community 19 / 25


Stars: 140
Forks: 25
Language: Jupyter Notebook
License: none
Last pushed: Jan 06, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/voice-ai/habla-liaa/ser-with-w2v2"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
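The curl call above can also be made from Python with the standard library. The JSON field names used below (`score`, `stars`, `forks`, `language`) are hypothetical illustrations, since the response schema is not documented here; adapt them to the real payload.

```python
import json
import urllib.request

API_URL = "https://pt-edge.onrender.com/api/v1/quality/voice-ai/habla-liaa/ser-with-w2v2"

def fetch_quality(url: str = API_URL) -> dict:
    """Fetch the quality report as JSON (requires network access)."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# Hypothetical sample payload standing in for a live response.
sample = {
    "score": 37,
    "stars": 140,
    "forks": 25,
    "language": "Jupyter Notebook",
}

def summarize(report: dict) -> str:
    """Render a one-line summary from the (assumed) report fields."""
    return f"score {report['score']}/100, {report['stars']} stars, {report['forks']} forks"

summary = summarize(sample)
```

To work against live data, replace `sample` with `fetch_quality()` and adjust `summarize` to the actual field names.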