TobyYang7/Llava_Qwen2

Visual Instruction Tuning for Qwen2 Base Model

Score: 28 / 100 · Experimental

This project integrates and fine-tunes large language and vision models for multi-modal analysis. It ingests diverse datasets, including specialized financial visuals, and produces a model capable of understanding and responding to visual instructions. It is designed for AI researchers and practitioners building custom multi-modal AI applications.

No commits in the last 6 months.

Use this if you are an AI researcher looking to fine-tune a powerful visual instruction model with enhanced capabilities for various domains, including finance.

Not ideal if you are looking for a ready-to-use application or API without needing to engage in model training and integration.

AI-model-training multi-modal-AI computer-vision natural-language-processing financial-data-analysis
Stale (6 months) · No package · No dependents
Maintenance 0 / 25
Adoption 7 / 25
Maturity 16 / 25
Community 5 / 25


Stars: 41
Forks: 2
Language: Python
License: Apache-2.0
Category: llm-fine-tuning
Last pushed: Jun 29, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/TobyYang7/Llava_Qwen2"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
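The same endpoint can be called from Python. This is a minimal sketch using only the standard library; the URL structure is taken from the curl command above, while the `quality_url` and `fetch_quality` helper names (and any response fields) are assumptions, not part of a documented client.

```python
import json
import urllib.request

# Base endpoint as shown in the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(registry: str, name: str) -> str:
    """Build the quality-report URL for a registry/package pair."""
    return f"{BASE}/{registry}/{name}"


def fetch_quality(registry: str, name: str) -> dict:
    """Fetch and decode the JSON quality report.

    Without an API key this is rate-limited to 100 requests/day.
    The shape of the returned JSON is not documented here, so callers
    should inspect it before relying on specific fields.
    """
    with urllib.request.urlopen(quality_url(registry, name)) as resp:
        return json.load(resp)


# Example: the repository on this page.
url = quality_url("transformers", "TobyYang7/Llava_Qwen2")
```

Calling `fetch_quality("transformers", "TobyYang7/Llava_Qwen2")` would perform the same request as the curl command, returning the report as a Python `dict`.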