imantdaunhawer/multimodal-contrastive-learning

[ICLR 2023] Official code for the paper "Identifiability Results for Multimodal Contrastive Learning"

Score: 34 / 100 (Emerging)

This is a research project providing code for deep learning researchers. It trains multimodal contrastive models on paired inputs (structured numerical data or image/text pairs) and evaluates whether the learned representations identify the distinct underlying factors shared across modalities. The output is a trained model together with evaluation results, useful for researchers studying representation learning and multimodal data analysis.
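The training objective behind projects like this is typically a symmetric contrastive (InfoNCE-style) loss over paired embeddings from two modalities. The sketch below is illustrative only, not taken from the repository's code; the function name and temperature value are assumptions.

```python
import numpy as np

def symmetric_info_nce(z_a, z_b, temperature=0.1):
    """Illustrative symmetric InfoNCE loss over paired embeddings.

    z_a, z_b: (n, d) arrays of embeddings; row i of z_a is paired
    with row i of z_b (e.g., an image and its caption).
    """
    # Normalize to unit length so the dot product is cosine similarity.
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature  # (n, n) pairwise similarities

    def log_softmax(x, axis):
        m = x.max(axis=axis, keepdims=True)
        return x - m - np.log(np.exp(x - m).sum(axis=axis, keepdims=True))

    n = logits.shape[0]
    # Matched pairs sit on the diagonal; each row (and each column) is a
    # softmax classification of the true partner.
    loss_ab = -np.diag(log_softmax(logits, axis=1)).mean()
    loss_ba = -np.diag(log_softmax(logits, axis=0)).mean()
    return (loss_ab + loss_ba) / 2
```

Minimizing this loss pulls matched pairs together and pushes mismatched pairs apart; the paper's identifiability results concern what such objectives can provably recover about the shared latent factors.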

No commits in the last 6 months.

Use this if you are a machine learning researcher exploring the theoretical underpinnings and practical application of multimodal contrastive learning for identifiability.

Not ideal if you are a practitioner looking for a ready-to-use tool for general-purpose multimodal data analysis without deep expertise in machine learning research.

deep-learning-research representation-learning multimodal-data identifiability machine-learning-theory
Stale (6 months) · No package · No dependents
Maintenance 0 / 25
Adoption 7 / 25
Maturity 16 / 25
Community 11 / 25

How are scores calculated?

Stars: 35

Forks: 4

Language: Python

License:

Last pushed: Mar 17, 2023

Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/imantdaunhawer/multimodal-contrastive-learning"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
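The endpoint above returns JSON that can be consumed in a few lines of Python. The field names used below (`score`, `tier`, `breakdown`) are assumptions for illustration, since the actual response schema is not shown on this page; only the URL comes from the source.

```python
import json
from urllib.request import urlopen  # for the live call, if desired

API_URL = ("https://pt-edge.onrender.com/api/v1/quality/"
           "ml-frameworks/imantdaunhawer/multimodal-contrastive-learning")

def fetch_quality(url=API_URL):
    """Fetch the live scorecard (performs a network request)."""
    with urlopen(url) as resp:
        return json.load(resp)

# Hypothetical example payload -- the real schema may differ.
sample = json.loads("""
{"score": 34, "tier": "Emerging",
 "breakdown": {"maintenance": 0, "adoption": 7, "maturity": 16, "community": 11}}
""")

# The four sub-scores (each out of 25) sum to the overall score out of 100.
total = sum(sample["breakdown"].values())
print(f"{sample['tier']}: {sample['score']}/100 (breakdown sums to {total})")
```

Swap `sample` for `fetch_quality()` to work with live data once you have confirmed the actual field names in a real response.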