pliang279/MultiBench

[NeurIPS 2021] Multiscale Benchmarks for Multimodal Representation Learning

47 / 100 (Emerging)

MultiBench offers a standardized platform for researchers and practitioners to evaluate and compare machine learning approaches across diverse data sources. It takes in multiple data types, such as video, audio, text, and physiological signals, and lets you test how well different algorithms predict or classify outcomes. It is aimed at machine learning researchers and data scientists who need to rigorously assess multimodal models.

615 stars. No commits in the last 6 months.

Use this if you are developing or evaluating machine learning models that integrate multiple types of data (e.g., combining visual and textual information) and need a consistent way to benchmark their performance, complexity, and robustness.

Not ideal if you are a business user looking for a ready-to-deploy solution for a specific business problem, rather than a benchmarking tool for research and development.

Tags: multimodal-AI, machine-learning-research, model-benchmarking, affective-computing, robotics
Flags: Stale (6m), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 21 / 25

How are scores calculated?
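The four sub-scores above sum to the overall score: 0 + 10 + 16 + 21 = 47 / 100.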

Stars: 615
Forks: 91
Language: HTML
License: MIT
Last pushed: Jan 27, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/pliang279/MultiBench"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
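
For programmatic access, here is a minimal Python sketch using the requests library. The endpoint URL comes from the curl command above; the Authorization header name and the shape of the returned JSON are assumptions rather than a documented schema, so inspect the raw response before relying on specific fields.

import requests

# Quality-report endpoint for this repo (same URL as the curl example above).
URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/pliang279/MultiBench"

def fetch_quality_report(api_key=None):
    """Fetch the quality report as JSON.

    An API key is optional: anonymous callers get 100 requests/day,
    keyed callers 1,000/day.
    """
    headers = {}
    if api_key:
        # Hypothetical header; check the API docs for the real auth scheme.
        headers["Authorization"] = f"Bearer {api_key}"
    resp = requests.get(URL, headers=headers, timeout=10)
    resp.raise_for_status()  # surface 4xx/5xx (e.g., rate-limit) errors
    return resp.json()

if __name__ == "__main__":
    report = fetch_quality_report()
    print(report)  # inspect the actual field names before parsing further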