google/crossmodal-3600

Crossmodal-3600 dataset

Quality score: 20 / 100 (Experimental)

Crossmodal-3600 is a dataset of 3,600 images paired with detailed text descriptions. It is aimed at researchers and AI developers building systems that must understand and generate content across both visual and linguistic modalities, and serves as a resource for training and evaluating multimodal AI models.

No commits in the last 6 months.

Use this if you are developing or researching AI models that need to learn from both images and their associated textual descriptions.

Not ideal if you need a single-modality dataset (only images or only text), or if you require data from a highly specialized domain.

Tags: AI research, computer vision, natural language processing, multimodal AI, dataset
No License · Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 8 / 25
Community: 7 / 25

How are scores calculated?

Stars: 10
Forks: 1
Language: HTML
License: None
Last pushed: Jan 23, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/google/crossmodal-3600"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
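The same endpoint can be called from code. A minimal Python sketch is below; the URL pattern (`/quality/{category}/{owner}/{repo}`) is taken from the curl example above, but the response's JSON shape and any API-key header name are not documented here, so the actual fetch is left commented out as an assumption.

```python
import urllib.request
import json

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a repository,
    following the pattern shown in the curl example."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("ml-frameworks", "google", "crossmodal-3600")

# To actually fetch the data (requires network access; the
# response is assumed to be JSON, which is not confirmed here):
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
#     print(data)
```

This only constructs the request URL; anonymous access is rate-limited to 100 requests/day per the note above.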