pliang279/MultiBench
[NeurIPS 2021] Multiscale Benchmarks for Multimodal Representation Learning
This project offers a standardized platform for evaluating and comparing machine learning approaches that learn from diverse data sources. It accepts multiple data types (video, audio, text, physiological signals, and more) and lets you measure how well different algorithms predict or classify outcomes. It is aimed at machine learning researchers and data scientists who need to rigorously assess multimodal models.
615 stars. No commits in the last 6 months.
Use this if you are developing or evaluating machine learning models that integrate multiple types of data (e.g., combining visual and textual information) and need a consistent way to benchmark their performance, complexity, and robustness; a minimal sketch of that workflow follows below.
Not ideal if you are a business user looking for a ready-to-deploy solution to a specific business problem rather than a benchmarking tool for research and development.
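To make that concrete, here is a minimal, hypothetical sketch of the kind of experiment such a benchmark standardizes: a late-fusion classifier over two modalities, trained for one step on synthetic tensors. All class names, dimensions, and the fusion choice are illustrative assumptions, not MultiBench's actual API; see the repository's own examples for its real dataloaders and training structures.

```python
import torch
import torch.nn as nn

# Hypothetical late-fusion setup: two modality encoders whose outputs are
# concatenated and passed to a shared classification head. Names and
# dimensions are illustrative assumptions, not MultiBench's API.
class LateFusionClassifier(nn.Module):
    def __init__(self, audio_dim=74, text_dim=300, hidden=128, n_classes=2):
        super().__init__()
        self.audio_enc = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
        self.text_enc = nn.Sequential(nn.Linear(text_dim, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, audio, text):
        # Concatenation ("late") fusion of the two encoded modalities.
        fused = torch.cat([self.audio_enc(audio), self.text_enc(text)], dim=-1)
        return self.head(fused)

# Synthetic stand-ins for one multimodal batch; a real benchmark run would
# supply per-dataset dataloaders instead.
audio = torch.randn(32, 74)
text = torch.randn(32, 300)
labels = torch.randint(0, 2, (32,))

model = LateFusionClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.CrossEntropyLoss()(model(audio, text), labels)
loss.backward()
opt.step()
print(f"one training step done, loss={loss.item():.3f}")
```

The value of a benchmark like this one is that the encoders, fusion module, and training loop above become swappable, standardized components, so different fusion strategies can be compared on identical data splits and metrics.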
Stars: 615
Forks: 91
Language: HTML
License: MIT
Category: ml-frameworks
Last pushed: Jan 27, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/pliang279/MultiBench"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
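The same endpoint can be queried programmatically; a minimal Python sketch using requests, assuming the endpoint returns JSON (the response schema is not documented here, so the example just prints the raw payload):

```python
import requests

# Quality-data endpoint for this repo (100 requests/day without a key).
url = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/pliang279/MultiBench"

resp = requests.get(url, timeout=10)
resp.raise_for_status()  # surface rate-limit or server errors
print(resp.json())       # inspect the payload to see available fields
```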
Higher-rated alternatives
open-mmlab/mmpretrain
OpenMMLab Pre-training Toolbox and Benchmark
facebookresearch/mmf
A modular framework for vision & language multimodal research from Facebook AI Research (FAIR)
HuaizhengZhang/Awsome-Deep-Learning-for-Video-Analysis
Papers, code and datasets about deep learning and multi-modal learning for video analysis
KaiyangZhou/pytorch-vsumm-reinforce
Unsupervised video summarization with deep reinforcement learning (AAAI'18)
adambielski/siamese-triplet
Siamese and triplet networks with online pair/triplet mining in PyTorch