Victorwz/VaLM
VaLM: Visually-augmented Language Modeling. ICLR 2023.
VaLM helps researchers in natural language processing (NLP) develop and evaluate language models that can 'see' as well as read: given large text and image datasets, it trains a language model that grounds its understanding and generation in retrieved visual information. Researchers working on multimodal AI would find this useful.
No commits in the last 6 months.
Use this if you are an AI researcher looking to build and experiment with next-generation language models that leverage visual context to understand and generate text more effectively.
Not ideal if you need an out-of-the-box solution for specific image-text tasks like image captioning or visual question answering without deep model development.
Stars: 56
Forks: 3
Language: Python
License: —
Category: —
Last pushed: Mar 06, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Victorwz/VaLM"
Open to everyone: 100 requests/day with no key. A free key raises the limit to 1,000/day.
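For scripted use, here is a minimal Python sketch of the same request. The "X-API-Key" header name and the response schema are assumptions; the service's docs do not specify them here.

import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/Victorwz/VaLM"

def fetch_quality(api_key: str | None = None) -> dict:
    # Anonymous access allows 100 requests/day; a free key allows 1,000/day.
    # Sending the key as an "X-API-Key" header is an assumption.
    headers = {"X-API-Key": api_key} if api_key else {}
    resp = requests.get(URL, headers=headers, timeout=10)
    resp.raise_for_status()  # surface 4xx/5xx errors, e.g. rate limiting
    return resp.json()

if __name__ == "__main__":
    data = fetch_quality()
    print(data)  # field names depend on the API's undocumented schema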
Higher-rated alternatives
kyegomez/RT-X
PyTorch implementation of the models RT-1-X and RT-2-X from the paper: "Open X-Embodiment:...
kyegomez/PALI3
Implementation of PALI3 from the paper "PaLI-3 Vision Language Models: Smaller, Faster, Stronger"
chuanyangjin/MMToM-QA
[🏆Outstanding Paper Award at ACL 2024] MMToM-QA: Multimodal Theory of Mind Question Answering
lyuchenyang/Macaw-LLM
Macaw-LLM: Multi-Modal Language Modeling with Image, Video, Audio, and Text Integration
Muennighoff/vilio
🥶Vilio: State-of-the-art VL models in PyTorch & PaddlePaddle