declare-lab/MM-Align
[EMNLP 2022] This repository contains the official implementation of the paper "MM-Align: Learning Optimal Transport-based Alignment Dynamics for Fast and Accurate Inference on Missing Modality Sequences"
This project helps researchers working with multimodal conversational data (audio, video, and text) produce accurate analysis and predictions even when some of those streams are incomplete or missing. It is aimed at researchers in natural language processing, affective computing, and human-computer interaction.
No commits in the last 6 months.
Use this if you are analyzing emotional content or communication patterns from multimodal datasets (like spoken dialogue with video) where some modalities (e.g., video or audio) might be partially or entirely unavailable.
Not ideal if your data is purely unimodal (e.g., only text) or if you are not dealing with sequential data where alignment between modalities is crucial.
Stars: 33
Forks: 2
Language: Python
License: MIT
Category:
Last pushed: Mar 10, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/declare-lab/MM-Align"
Open to everyone: 100 requests/day with no key; a free key raises the limit to 1,000/day.
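The same endpoint can be called from Python. A minimal sketch using only the standard library; the response schema is not documented here, and the `Authorization: Bearer` header used for the API key is an assumption, so check the API docs before relying on either:

```python
import json
import urllib.request
from typing import Optional

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a GitHub owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str, api_key: Optional[str] = None) -> dict:
    """Fetch the quality record as parsed JSON.

    Passing an api_key is assumed to raise the rate limit; the header
    scheme below is a guess, not documented in this listing.
    """
    req = urllib.request.Request(quality_url(owner, repo))
    if api_key:
        req.add_header("Authorization", f"Bearer {api_key}")  # assumed scheme
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Prints the URL from the curl example above; fetch_quality() would
    # actually hit the network.
    print(quality_url("declare-lab", "MM-Align"))
```

Calling `fetch_quality("declare-lab", "MM-Align")` should return the same data as the curl command.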
Higher-rated alternatives
kyegomez/RT-X
Pytorch implementation of the models RT-1-X and RT-2-X from the paper: "Open X-Embodiment:...
kyegomez/PALI3
Implementation of PALI3 from the paper "PALI-3 VISION LANGUAGE MODELS: SMALLER, FASTER, STRONGER"
chuanyangjin/MMToM-QA
[🏆Outstanding Paper Award at ACL 2024] MMToM-QA: Multimodal Theory of Mind Question Answering
lyuchenyang/Macaw-LLM
Macaw-LLM: Multi-Modal Language Modeling with Image, Video, Audio, and Text Integration
Muennighoff/vilio
🥶Vilio: State-of-the-art VL models in PyTorch & PaddlePaddle