j-min/VL-T5
PyTorch code for "Unifying Vision-and-Language Tasks via Text Generation" (ICML 2021)
VL-T5 frames vision-and-language tasks as text generation: given an image and a text prompt, a single model produces its output as text, whether that is an answer, a caption, or a translation. It is aimed at machine learning researchers and AI engineers building and evaluating multimodal models for tasks such as visual question answering and image captioning.
374 stars. No commits in the last 6 months.
Use this if you are an AI researcher or developer building or evaluating models that need to understand both images and text together.
Not ideal if you are looking for a ready-to-use application for everyday tasks like generating marketing copy or summarizing documents.
Stars: 374
Forks: 57
Language: Python
License: MIT
Category:
Last pushed: Jul 29, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/j-min/VL-T5"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
dorarad/gansformer
Generative Adversarial Transformers
invictus717/MetaTransformer
Meta-Transformer for Unified Multimodal Learning
rkansal47/MPGAN
The message-passing GAN (https://arxiv.org/abs/2106.11535) and generative adversarial particle...
Yachay-AI/byt5-geotagging
Confidence- and ByT5-based geotagging model predicting coordinates from text alone.
sisinflab/Ducho
Ducho is a Python framework aimed at extracting multimodal features used in multimodal...