friedrichor/Awesome-Multimodal-Papers
A curated list of awesome Multimodal studies.
This is a curated collection of research papers on multimodal studies, which combine multiple data types such as images, text, and audio. It lets researchers quickly find relevant work, along with publication venues and, where available, code or project pages. Its primary users are researchers and academics in fields like AI, machine learning, and computer vision who need to stay current on advances in combining multiple data modalities.
Use this if you are a researcher needing to discover and track cutting-edge academic papers in multimodal AI, across various categories like visual understanding, multimodal generation, and more.
Not ideal if you are looking for ready-to-use software applications or detailed tutorials for implementing multimodal solutions, as this is a list of academic papers.
Stars: 317
Forks: 23
Language: —
License: MIT
Category:
Last pushed: Mar 11, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/friedrichor/Awesome-Multimodal-Papers"
Open to everyone: 100 requests/day with no key needed, or get a free key for 1,000 requests/day.
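For scripted access, the curl call above can be wrapped in a small Python helper. This is a minimal sketch using only the standard library; the endpoint URL is taken from the page, but the `build_request` helper name, the `Authorization: Bearer` header used for an API key, and the JSON response shape are assumptions not confirmed by the API's documentation.

```python
import urllib.request

# Endpoint base taken from the curl example on this page.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def build_request(owner, repo, api_key=None):
    """Build a GET request for a repo's quality data.

    api_key is optional; the header name below is an assumption,
    since unauthenticated access is allowed at 100 requests/day.
    """
    url = f"{BASE}/ml-frameworks/{owner}/{repo}"
    headers = {}
    if api_key:
        headers["Authorization"] = f"Bearer {api_key}"  # assumed header
    return urllib.request.Request(url, headers=headers)

# Opening the request (network call) would look like:
#   with urllib.request.urlopen(build_request("friedrichor",
#           "Awesome-Multimodal-Papers")) as resp:
#       data = resp.read()
```

Separating request construction from the network call keeps the URL and header logic testable offline.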
Related frameworks
open-mmlab/mmpretrain: OpenMMLab Pre-training Toolbox and Benchmark
facebookresearch/mmf: A modular framework for vision & language multimodal research from Facebook AI Research (FAIR)
adambielski/siamese-triplet: Siamese and triplet networks with online pair/triplet mining in PyTorch
HuaizhengZhang/Awsome-Deep-Learning-for-Video-Analysis: Papers, code and datasets about deep learning and multi-modal learning for video analysis
KaiyangZhou/pytorch-vsumm-reinforce: Unsupervised video summarization with deep reinforcement learning (AAAI'18)