IliaZenkov/transformer-cnn-emotion-recognition

Speech Emotion Classification with novel Parallel CNN-Transformer model built with PyTorch, plus thorough explanations of CNNs, Transformers, and everything in between

48 / 100 (Emerging)

This project classifies emotions from speech. It takes audio recordings as input and identifies one of eight emotions, such as happiness, sadness, or anger. It's aimed at researchers in psychology or human-computer interaction, and at anyone analyzing the emotional content of spoken language.

266 stars. No commits in the last 6 months.

Use this if you need to automatically detect and classify emotional states from spoken audio clips.

Not ideal if you need real-time emotion detection in live conversations or fine-grained emotional nuances beyond eight basic categories.

speech-analysis emotion-recognition audio-classification human-computer-interaction psychology-research
Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 22 / 25
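The headline score appears to be the plain sum of the four 25-point sub-scores, which matches the numbers on this card. A minimal sketch of that arithmetic (the simple-sum formula is an assumption inferred from the displayed values, not documented by the site):

```python
# Sub-scores as shown on this card, each out of 25.
subscores = {
    "Maintenance": 0,
    "Adoption": 10,
    "Maturity": 16,
    "Community": 22,
}

# Assumption: the 100-point overall score is the sum of the four parts.
overall = sum(subscores.values())
print(f"{overall} / 100")  # 48 / 100, matching the headline score
```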


Stars: 266
Forks: 51
Language: Jupyter Notebook
License: MIT
Last pushed: Nov 06, 2020
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/IliaZenkov/transformer-cnn-emotion-recognition"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
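For use from a script rather than the shell, the same endpoint can be fetched with Python's standard library. This is a sketch built only from the curl example above: the URL layout (`/quality/<category>/<owner>/<repo>`) is taken from that example, and the response is assumed to be JSON since the schema isn't documented here.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def build_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL, mirroring the curl example's path layout."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch a repo's quality record (assumes the endpoint returns JSON)."""
    with urllib.request.urlopen(build_url(category, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Same repo as the curl example; no API key needed under the free tier.
    url = build_url("ml-frameworks", "IliaZenkov",
                    "transformer-cnn-emotion-recognition")
    print(url)
```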