Multimodal-Emotion-Recognition and MULTIMODAL-EMOTION-RECOGNITION

These repositories are competitors offering similar multimodal emotion recognition implementations: maelfabien/Multimodal-Emotion-Recognition provides a web application interface, while ankurbhatia24/MULTIMODAL-EMOTION-RECOGNITION focuses on dataset-based model training. They are alternative approaches to the same problem rather than tools designed to work together.

maelfabien/Multimodal-Emotion-Recognition
Maintenance: 0/25 | Adoption: 10/25 | Maturity: 16/25 | Community: 25/25
Stars: 1,072 | Forks: 318 | Commits (30d): 0 | Downloads:
Language: Jupyter Notebook | License: Apache-2.0
Status: Stale (6 months) | No Package | No Dependents

ankurbhatia24/MULTIMODAL-EMOTION-RECOGNITION
Maintenance: 0/25 | Adoption: 9/25 | Maturity: 16/25 | Community: 20/25
Stars: 110 | Forks: 26 | Commits (30d): 0 | Downloads:
Language: Jupyter Notebook | License: GPL-3.0
Status: Stale (6 months) | No Package | No Dependents

About Multimodal-Emotion-Recognition

maelfabien/Multimodal-Emotion-Recognition

A real-time multimodal emotion recognition web app for text, sound, and video inputs

This tool helps HR managers and recruiters analyze the emotional state of job candidates. It takes video, audio, and text input from an interview or interaction and provides insights into the candidate's emotions. It is designed for those who need to assess candidate affect quickly and in real time.
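The repository's actual architecture is not detailed here, but an app combining text, sound, and video predictions typically ends with a late-fusion step. The sketch below is a hypothetical illustration of that idea: each modality model emits a probability per emotion, and the fused score is their weighted average. All names, labels, and weights are illustrative assumptions, not taken from the repository.

```python
# Hypothetical late-fusion sketch: average per-modality emotion probabilities.
# Emotion labels and weights are illustrative, not from the repository.

EMOTIONS = ["angry", "happy", "sad", "neutral"]

def fuse_predictions(text_probs, audio_probs, video_probs,
                     weights=(1.0, 1.0, 1.0)):
    """Weighted average of per-modality emotion probability dicts."""
    total = sum(weights)
    return {
        emotion: (
            weights[0] * text_probs.get(emotion, 0.0)
            + weights[1] * audio_probs.get(emotion, 0.0)
            + weights[2] * video_probs.get(emotion, 0.0)
        ) / total
        for emotion in EMOTIONS
    }

def top_emotion(fused):
    """Return the label with the highest fused probability."""
    return max(fused, key=fused.get)
```

A real system would replace the three probability dicts with the outputs of trained text, audio, and facial-expression classifiers, and might learn the fusion weights rather than fixing them.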

recruitment HR-tech candidate-assessment interview-analysis affective-computing

About MULTIMODAL-EMOTION-RECOGNITION

ankurbhatia24/MULTIMODAL-EMOTION-RECOGNITION

Human emotion understanding using a multimodal dataset.

This project helps researchers and developers build intelligent systems capable of understanding human emotions in real-time conversations. By analyzing spoken words, vocal tone, and facial expressions from video and audio, it identifies emotions like anger, joy, or sadness. The system outputs emotion labels for each turn in a dialogue, enabling more natural and responsive AI interactions for those working on cognitive AI partners or advanced dialogue systems.
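The described output (one emotion label per dialogue turn) can be sketched as a simple data flow. This is an illustrative assumption about the shape of such a system, not the repository's actual API: `classify_turn` is a placeholder heuristic standing in for a trained multimodal classifier that would also consume audio and video features.

```python
# Illustrative per-turn dialogue labeling. `classify_turn` is a stand-in
# heuristic; a real system would use a trained multimodal model here.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Turn:
    speaker: str
    text: str
    # In a real system, audio and video features would accompany the text.
    emotion: Optional[str] = None

def classify_turn(turn: Turn) -> str:
    """Placeholder keyword heuristic standing in for a trained classifier."""
    lowered = turn.text.lower()
    if "great" in lowered or "thanks" in lowered:
        return "joy"
    if "unacceptable" in lowered:
        return "anger"
    return "neutral"

def label_dialogue(turns):
    """Attach an emotion label to every turn in the dialogue."""
    for turn in turns:
        turn.emotion = classify_turn(turn)
    return turns
```

A dialogue system could then condition its next response on the emotion label of the most recent turn, which is what makes the interaction "responsive" in the sense the description uses.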

AI-robotics conversational-AI human-computer-interaction emotion-recognition dialogue-systems

Scores updated daily from GitHub, PyPI, and npm data.