AdrianBZG/llama-multimodal-vqa
Multimodal Instruction Tuning for Llama 3
This project helps AI developers adapt the Llama 3 language model to understand and respond to questions that require both text and image input. You provide a dataset containing image-text pairs and corresponding question-answer conversations. The output is a fine-tuned Llama 3 model capable of visual question answering. This is for AI engineers or researchers building custom multimodal AI applications.
No commits in the last 6 months.
Use this if you need to fine-tune a Llama 3 model to answer questions grounded in visual information alongside textual instructions.
Not ideal if you are looking for a ready-to-use application, or if you lack experience with model training and custom dataset preparation.
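For orientation, here is a minimal sketch of the kind of training record such a pipeline typically consumes, written as a LLaVA-style conversation entry. The field names, the "<image>" placeholder, and the paths are illustrative assumptions, not taken from this repository's code:

import json

# Hypothetical LLaVA-style record: one image paired with a
# question-answer conversation. All field names are assumed,
# not taken from this repo.
record = {
    "id": "000123",                    # sample id (illustrative)
    "image": "images/000123.jpg",      # path to the paired image
    "conversations": [
        {"from": "human", "value": "<image>\nWhat color is the bus?"},
        {"from": "gpt", "value": "The bus is red."},
    ],
}

print(json.dumps(record, indent=2))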
Stars: 51
Forks: 11
Language: Python
License: MIT
Category:
Last pushed: Apr 25, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/AdrianBZG/llama-multimodal-vqa"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000 requests/day.
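The same request from Python, assuming nothing beyond the endpoint shown above; the response schema is not documented here, so the parsed JSON is printed as-is:

import requests

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/AdrianBZG/llama-multimodal-vqa")

resp = requests.get(URL, timeout=10)
resp.raise_for_status()  # surface 4xx/5xx errors instead of parsing an error page
print(resp.json())       # dump the parsed JSON payload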
Higher-rated alternatives
TinyLLaVA/TinyLLaVA_Factory
A Framework of Small-scale Large Multimodal Models
zjunlp/EasyInstruct
[ACL 2024] An Easy-to-use Instruction Processing Framework for LLMs.
rese1f/MovieChat
[CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding
haotian-liu/LLaVA
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
NVlabs/Eagle
Eagle: Frontier Vision-Language Models with Data-Centric Strategies