declare-lab/MM-Align

[EMNLP 2022] This repository contains the official implementation of the paper "MM-Align: Learning Optimal Transport-based Alignment Dynamics for Fast and Accurate Inference on Missing Modality Sequences"

Score: 29 / 100 (Experimental)

This project helps researchers working with multimodal conversational data (audio, video, and text) analyze it even when some modalities are missing. It processes these diverse, possibly incomplete data streams to produce accurate analyses and predictions. It is aimed at researchers in natural language processing, affective computing, and human-computer interaction.

No commits in the last 6 months.

Use this if you are analyzing emotional content or communication patterns from multimodal datasets (like spoken dialogue with video) where some modalities (e.g., video or audio) might be partially or entirely unavailable.

Not ideal if your data is purely unimodal (e.g., text only) or if you are not working with sequential data where cross-modal alignment is crucial.

multimodal-sentiment-analysis missing-data-imputation affective-computing human-computer-interaction conversation-analysis
Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 6 / 25


Stars: 33
Forks: 2
Language: Python
License: MIT
Last pushed: Mar 10, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/declare-lab/MM-Align"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.