ankurbhatia24/MULTIMODAL-EMOTION-RECOGNITION

Human Emotion Understanding using multimodal dataset.

Score: 45 / 100 (Emerging)

This project helps researchers and developers build intelligent systems that understand human emotions in real-time conversations. By analyzing spoken words, vocal tone, and facial expressions from video and audio, it identifies emotions such as anger, joy, or sadness. The system outputs an emotion label for each turn in a dialogue, enabling more natural, responsive interactions in cognitive AI partners and advanced dialogue systems.

110 stars. No commits in the last 6 months.

Use this if you are developing AI agents or conversational systems and need to accurately detect human emotions from spoken dialogue, including visual and auditory cues.

Not ideal if you only need to analyze emotions from text, or if your application requires analyzing non-conversational, static images or video.

AI-robotics conversational-AI human-computer-interaction emotion-recognition dialogue-systems
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 9 / 25
Maturity 16 / 25
Community 20 / 25
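The four subscores shown above appear to sum directly to the overall score; this is an assumption inferred from the numbers displayed (the exact weighting is explained under "How are scores calculated?"). A minimal sketch of that arithmetic:

```python
# Assumption: overall score = sum of four 0-25 subscores (Maintenance,
# Adoption, Maturity, Community), giving a 0-100 total.
subscores = {
    "Maintenance": 0,   # stale repo: no recent commits
    "Adoption": 9,
    "Maturity": 16,
    "Community": 20,
}
overall = sum(subscores.values())
print(overall)  # 0 + 9 + 16 + 20 = 45, matching the 45/100 shown
```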

How are scores calculated?

Stars: 110
Forks: 26
Language: Jupyter Notebook
License: GPL-3.0
Last pushed: Jul 27, 2020
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/ankurbhatia24/MULTIMODAL-EMOTION-RECOGNITION"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.