tosiyuki/LLaVA-JP
LLaVA-JP is a Japanese Vision-Language Model (VLM) trained with the LLaVA method.
This project provides pre-trained models and code for building systems that understand and answer questions about images in Japanese. You provide an image and a Japanese question, and the model generates a natural-language answer describing the image or addressing the query. This is useful for developers and AI researchers building multimodal applications for Japanese speakers.
No commits in the last 6 months.
Use this if you need to build or customize a VLM for understanding Japanese text alongside images, especially if you are working with lightweight large language models.
Not ideal if you want a ready-to-use application or lack experience with model training and fine-tuning: the project provides code and models for development, not an end-user tool.
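The repository ships its own model classes and inference scripts, so the real entry points are in its README. As a rough sketch of the image-plus-Japanese-question flow it implements, here is a minimal example using the generic LLaVA classes from Hugging Face transformers; the checkpoint id, prompt template, and generation settings are illustrative assumptions, not LLaVA-JP's documented API.

# Hedged sketch of the image + Japanese-question -> answer flow.
# ASSUMPTIONS: the checkpoint id and prompt template below belong to the
# generic llava-hf checkpoints in transformers, not to LLaVA-JP itself;
# LLaVA-JP provides its own model classes, so follow its README for real usage.
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"  # placeholder LLaVA-style checkpoint
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(model_id)

image = Image.open("example.jpg")  # any local image
# "What is shown in this image?" in Japanese, in the LLaVA-1.5 chat format
prompt = "USER: <image>\nこの画像には何が写っていますか？ ASSISTANT:"

inputs = processor(images=image, text=prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])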
Stars: 64
Forks: 13
Language: Python
License: Apache-2.0
Category:
Last pushed: Jul 03, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/tosiyuki/LLaVA-JP"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
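To consume the same endpoint from code, here is a minimal Python sketch; the JSON field names are not documented on this card, so inspect the response before relying on a particular schema.

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/tosiyuki/LLaVA-JP"
resp = requests.get(url, timeout=10)  # unauthenticated: 100 requests/day
resp.raise_for_status()
data = resp.json()
print(data)  # inspect the actual schema before picking out fields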
Higher-rated alternatives
KimMeen/Time-LLM
[ICLR 2024] Official implementation of "🦙 Time-LLM: Time Series Forecasting by Reprogramming...
om-ai-lab/VLM-R1
Solve Visual Understanding with Reinforced VLMs
bytedance/SALMONN
SALMONN family: A suite of advanced multi-modal LLMs
NVlabs/OmniVinci
OmniVinci is an omni-modal LLM for joint understanding of vision, audio, and language.
fixie-ai/ultravox
A fast multimodal LLM for real-time voice