ankurbhatia24/MULTIMODAL-EMOTION-RECOGNITION
Human Emotion Understanding using a multimodal dataset.
This project helps researchers and developers build intelligent systems capable of understanding human emotions in real-time conversations. By analyzing spoken words, vocal tone, and facial expressions from video and audio, it identifies emotions like anger, joy, or sadness. The system outputs emotion labels for each turn in a dialogue, enabling more natural and responsive AI interactions for those working on cognitive AI partners or advanced dialogue systems.
110 stars. No commits in the last 6 months.
Use this if you are developing AI agents or conversational systems and need to accurately detect human emotions from spoken dialogue, including visual and auditory cues.
Not ideal if you only need to analyze emotions from text, or if your application requires analyzing non-conversational, static images or video.
Stars: 110
Forks: 26
Language: Jupyter Notebook
License: GPL-3.0
Category: ML frameworks
Last pushed: Jul 27, 2020
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/ankurbhatia24/MULTIMODAL-EMOTION-RECOGNITION"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
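The same endpoint can be called programmatically. A minimal Python sketch is below; the URL-building helper is illustrative, and the response field names in the sample are assumptions rather than documented API output:

```python
# Minimal sketch of querying the quality API from Python.
# NOTE: the response field names below ("stars", "forks", "language")
# are assumptions for illustration; inspect a real response to confirm.
import json
import urllib.parse

BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository API URL (hypothetical helper)."""
    return f"{BASE}/{urllib.parse.quote(owner)}/{urllib.parse.quote(repo)}"

url = quality_url("ankurbhatia24", "MULTIMODAL-EMOTION-RECOGNITION")
print(url)

# Parsing a response might look like this (hypothetical JSON shape):
sample = json.loads('{"stars": 110, "forks": 26, "language": "Jupyter Notebook"}')
print(sample["stars"], sample["language"])
```

For a real request, pass the URL to `urllib.request.urlopen` or a library such as `requests`, staying within the 100 requests/day unauthenticated limit.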
Higher-rated alternatives
MiteshPuthran/Speech-Emotion-Analyzer
The neural network model is capable of detecting five different male/female emotions from audio...
maelfabien/Multimodal-Emotion-Recognition
A real time Multimodal Emotion Recognition web app for text, sound and video inputs
x4nth055/emotion-recognition-using-speech
Building and training Speech Emotion Recognizer that predicts human emotions using Python,...
marcogdepinto/emotion-classification-from-audio-files
Understanding emotions from audio files using neural networks and multiple datasets.
xiamx/awesome-sentiment-analysis
😀😄😂😠A curated list of Sentiment Analysis methods, implementations and misc. 😥😟😱😤