audeering/w2v2-how-to

How to use our public wav2vec2 dimensional emotion model

Quality score: 43 / 100 (Emerging)

This project helps you understand the emotional content of spoken audio. You input raw speech audio, and it outputs numerical values representing arousal, dominance, and valence (the three dimensions of emotion). This is useful for researchers and practitioners studying human emotion in speech.

542 stars. No commits in the last 6 months.

Use this if you need to analyze the emotional state expressed in speech, like in psychological studies or human-computer interaction research.

Not ideal if you need to detect specific discrete emotions (like 'happy' or 'sad') rather than continuous emotional dimensions.
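If you do need coarse categorical labels, the model's continuous arousal/dominance/valence scores can still be post-processed into them. A minimal illustrative sketch of such a mapping follows; the 0.5 thresholds and the quadrant label names are assumptions for illustration, not part of the model:

```python
def adv_to_label(arousal: float, valence: float) -> str:
    """Map continuous arousal/valence scores (assumed in [0, 1])
    to a coarse quadrant label. Thresholds and names are illustrative."""
    high_arousal = arousal >= 0.5
    high_valence = valence >= 0.5
    if high_arousal and high_valence:
        return "excited"     # high arousal, positive valence
    if high_arousal:
        return "distressed"  # high arousal, negative valence
    if high_valence:
        return "calm"        # low arousal, positive valence
    return "gloomy"          # low arousal, negative valence

# Example: a prediction of arousal=0.8, valence=0.7 falls in the
# high-arousal / positive-valence quadrant.
print(adv_to_label(0.8, 0.7))  # excited
```

This kind of thresholding loses information compared to the raw dimensional scores, which is why projects needing discrete emotions usually pick a classifier trained on categorical labels instead.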

Tags: speech-emotion-recognition, audio-analysis, psychological-research, sentiment-analysis, human-computer-interaction

Status: Stale (6 months), no package published, no known dependents

Score breakdown:
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 17 / 25


Stars: 542
Forks: 51
Language: Jupyter Notebook
License: MIT
Last pushed: May 22, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/audeering/w2v2-how-to"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
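The same endpoint can also be called from Python using only the standard library. A minimal sketch follows; it builds the URL from the curl example above, and since the response schema is not documented here, the actual fetch is shown but commented out:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL shown in the curl example above."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("ml-frameworks", "audeering", "w2v2-how-to")
print(url)

# To actually retrieve the JSON payload (requires network access):
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
```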