ys-zong/awesome-self-supervised-multimodal-learning

[T-PAMI] A curated list of self-supervised multimodal learning resources.

Score: 26 / 100 (Experimental)

This resource curates research in self-supervised multimodal learning, helping researchers and machine learning engineers develop models that understand images, text, audio, and other data types without extensive human-labeled data. It provides a structured list of papers with code, categorized by training objective and application. The target audience is academic researchers and advanced AI/ML practitioners building sophisticated, data-efficient multimodal systems.

277 stars. No commits in the last 6 months.

Use this if you are a researcher or advanced ML practitioner aiming to explore or implement cutting-edge self-supervised methods for combining and interpreting diverse data streams like vision, language, and sound.

Not ideal if you are looking for an off-the-shelf software tool or a basic introduction to machine learning; this resource is highly technical and research-focused.

Tags: Machine Learning Research · Multimodal AI · Self-Supervised Learning · Deep Learning · Computer Vision
Flags: No License · Stale (6 months) · No Package · No Dependents
Score breakdown (the four components sum to the 26 / 100 overall):
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 8 / 25
Community: 8 / 25

Stars: 277
Forks: 8
Language: —
License: none
Last pushed: Aug 16, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/ys-zong/awesome-self-supervised-multimodal-learning"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
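
For programmatic access, here is a minimal Python sketch equivalent to the curl command above. It assumes only that the endpoint returns JSON; the response schema is not documented here, so the sketch prints the raw payload rather than assuming field names.

import json
import urllib.request

# Endpoint shown in the curl example above.
URL = ("https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/"
       "ys-zong/awesome-self-supervised-multimodal-learning")

def fetch_quality() -> dict:
    # Anonymous access is limited to 100 requests/day, so cache the
    # result rather than calling this in a loop.
    with urllib.request.urlopen(URL, timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # The field names in the response are not specified here; dump the
    # raw JSON so the actual schema is visible.
    print(json.dumps(fetch_quality(), indent=2))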