amazon-science/crossmodal-contrastive-learning

CrossCLR: Cross-modal Contrastive Learning For Multi-modal Video Representations, ICCV 2021

Score: 39 / 100 (Emerging)

This project helps machine learning engineers and computer vision researchers build video models that understand both visual content and associated text descriptions. It takes pre-extracted video and text features and computes a contrastive loss that measures how well the two modalities align in a shared embedding space. This helps in training models that can better interpret and categorize video data.
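For orientation, the sketch below shows a generic symmetric InfoNCE-style cross-modal contrastive loss between video and text embeddings. It is an illustrative assumption of how such a loss is typically wired up in PyTorch, not the repository's actual CrossCLR implementation (which adds intra-modality terms and sample weighting); the function name and temperature value are placeholders.

```python
# Illustrative sketch only: a generic cross-modal contrastive (InfoNCE) loss.
# NOT the repo's CrossCLR loss; names and temperature are assumptions.
import torch
import torch.nn.functional as F

def cross_modal_contrastive_loss(video_emb, text_emb, temperature=0.07):
    """video_emb, text_emb: (batch, dim) feature tensors for matched pairs."""
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.T / temperature                    # pairwise similarities
    targets = torch.arange(v.size(0), device=v.device)
    loss_v2t = F.cross_entropy(logits, targets)       # video -> text
    loss_t2v = F.cross_entropy(logits.T, targets)     # text -> video
    return 0.5 * (loss_v2t + loss_t2v)

# Example with random features standing in for pre-extracted video/text features
video_features = torch.randn(8, 512)
text_features = torch.randn(8, 512)
loss = cross_modal_contrastive_loss(video_features, text_features)
```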

No commits in the last 6 months.

Use this if you are a machine learning engineer working on video understanding and need to train models that learn from both video frames and related textual metadata.

Not ideal if you are looking for a ready-to-use video analysis application, as this project provides a specific loss function for model training rather than an end-to-end solution.

video-understanding multimodal-learning machine-learning-research computer-vision deep-learning-training
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 8 / 25
Maturity 16 / 25
Community 15 / 25


Stars: 64
Forks: 10
Language: Python
License: Apache-2.0
Last pushed: Feb 07, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/amazon-science/crossmodal-contrastive-learning"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
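A minimal sketch of calling the same endpoint from Python, assuming the response is JSON; the response schema is not documented here, so the printed fields are whatever the API returns.

```python
# Hypothetical usage of the quality API shown above (unauthenticated tier).
import requests

url = ("https://pt-edge.onrender.com/api/v1/quality/transformers/"
       "amazon-science/crossmodal-contrastive-learning")
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())
```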