willxxy/awesome-mmps
Corpus of resources for multimodal machine learning with physiological signals (mmps).
This resource is a curated collection of academic papers and datasets on machine learning over human physiological signals combined with other modalities. It helps researchers, particularly in biomedical and cognitive science fields, find prior work and relevant datasets for projects that analyze signals such as EEG, ECG, or eye movements alongside data like text or facial expressions. The output is a structured list of relevant publications and datasets.
Use this if you are a researcher or scientist looking for existing academic literature and datasets on multimodal machine learning using physiological signals for applications like emotion recognition, cognitive load assessment, or language processing.
Not ideal if you are looking for ready-to-use software, code libraries, or direct access to APIs for building machine learning models.
Stars: 151
Forks: 8
Language: —
License: MIT
Category:
Last pushed: Feb 26, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/willxxy/awesome-mmps"
Open to everyone: 100 requests/day with no key. A free key raises the limit to 1,000 requests/day.
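The curl command above can be wrapped in a small Python helper. This is a minimal sketch: the endpoint path is taken from the page, but the `Authorization: Bearer` header used for the optional API key is an assumption, since the page does not document how a key is passed.

```python
import json
import urllib.request

# Endpoint base taken from the curl example on this page.
BASE_URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"


def build_request(owner, repo, api_key=None):
    """Build a GET request for a repository's quality data.

    The Authorization header scheme below is an assumption; check the
    API's own docs for how it actually expects a key.
    """
    url = f"{BASE_URL}/{owner}/{repo}"
    headers = {"Accept": "application/json"}
    if api_key:
        headers["Authorization"] = f"Bearer {api_key}"  # assumed scheme
    return urllib.request.Request(url, headers=headers)


def fetch_quality(owner, repo, api_key=None):
    """Fetch and decode the JSON payload for one repository."""
    req = build_request(owner, repo, api_key)
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)
```

For example, `fetch_quality("willxxy", "awesome-mmps")` requests the same URL as the curl command shown above.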
Higher-rated alternatives
open-mmlab/mmpretrain: OpenMMLab Pre-training Toolbox and Benchmark
facebookresearch/mmf: A modular framework for vision & language multimodal research from Facebook AI Research (FAIR)
adambielski/siamese-triplet: Siamese and triplet networks with online pair/triplet mining in PyTorch
HuaizhengZhang/Awsome-Deep-Learning-for-Video-Analysis: Papers, code and datasets about deep learning and multi-modal learning for video analysis
KaiyangZhou/pytorch-vsumm-reinforce: Unsupervised video summarization with deep reinforcement learning (AAAI'18)