AdrianBZG/llama-multimodal-vqa

Multimodal Instruction Tuning for Llama 3

Score: 41 / 100 (Emerging)

This project helps AI developers adapt the Llama 3 language model to understand and respond to questions that require both text and image input. You provide a dataset containing image-text pairs and corresponding question-answer conversations. The output is a fine-tuned Llama 3 model capable of visual question answering. This is for AI engineers or researchers building custom multimodal AI applications.
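
As a rough illustration, a training entry for this kind of pipeline typically pairs an image with a multi-turn question-answer conversation, in the style popularized by LLaVA. The sketch below is hypothetical and not taken from this repository; its field names and <image> placeholder are assumptions.

import json

# Hypothetical LLaVA-style training example (assumed schema, not the
# repo's documented format): one image plus a question-answer exchange.
example = {
    "image": "images/0001.jpg",  # path to the paired image
    "conversations": [
        {"from": "human", "value": "<image>\nWhat is the dog holding?"},
        {"from": "gpt", "value": "The dog is holding a red frisbee."},
    ],
}

print(json.dumps(example, indent=2))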

No commits in the last 6 months.

Use this if you need to train a Llama 3 model to answer questions that draw on both visual information and textual instructions.

Not ideal if you are looking for a ready-to-use application, or if you lack experience with AI model training and custom dataset preparation.

Tags: AI model training · multimodal AI · visual question answering · large language models · custom AI development
Status: Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 17 / 25

Stars: 51
Forks: 11
Language: Python
License: MIT
Last pushed: Apr 25, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/AdrianBZG/llama-multimodal-vqa"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
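
For programmatic use, here is a minimal Python sketch of the same request; it assumes only that the endpoint returns JSON, since the response schema is not documented on this card.

import json

import requests  # third-party: pip install requests

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/AdrianBZG/llama-multimodal-vqa")

resp = requests.get(URL, timeout=10)
resp.raise_for_status()  # surface rate-limit or server errors loudly
print(json.dumps(resp.json(), indent=2))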