mbzuai-oryx/LLaVA-pp
🔥🔥 LLaVA++: Extending LLaVA with Phi-3 and LLaMA-3 (LLaVA LLaMA-3, LLaVA Phi-3)
This project helps researchers and developers enhance the visual understanding capabilities of large language models. By integrating new models like Phi-3 and LLaMA-3, it allows for more accurate interpretation of images alongside text. The input is an existing LLaVA 1.5 model and relevant training data, and the output is a more powerful multimodal AI model. This is for AI researchers and developers working on multimodal AI.
848 stars. No commits in the last 6 months.
Use this if you are an AI researcher or developer looking to upgrade the visual understanding capabilities of your LLaVA models with the latest language models like Phi-3 and LLaMA-3.
Not ideal if you are an end-user without a technical background in AI model development, as this tool requires familiarity with model training and deployment.
Stars: 848
Forks: 61
Language: Python
License: —
Category: —
Last pushed: Aug 05, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/mbzuai-oryx/LLaVA-pp"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
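The curl command above hits the quality endpoint directly. A minimal Python sketch of the same call is below; the endpoint path comes from this page, but the helper names and any response fields are assumptions, since the API's schema isn't documented here.

```python
# Minimal sketch of querying the pt-edge quality API for a repo.
# The base URL is from this page; function names are illustrative only.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner: str, repo: str) -> str:
    """Build the API URL for a given GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record as a dict (makes a network call)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

print(quality_url("mbzuai-oryx", "LLaVA-pp"))
```

Anonymous access is rate-limited to 100 requests/day, so a script polling many repositories would want a free API key (1,000/day) and backoff on errors.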
Higher-rated alternatives
jingyaogong/minimind-v
🚀 Train a 26M-parameter visual multimodal VLM from scratch in just 1 hour!
roboflow/vision-ai-checkup
Take your LLM to the optometrist.
SkyworkAI/Skywork-R1V
Skywork-R1V is an advanced multimodal AI model series developed by Skywork AI, specializing in...
zai-org/GLM-TTS
GLM-TTS: Controllable & Emotion-Expressive Zero-shot TTS with Multi-Reward Reinforcement Learning
NExT-GPT/NExT-GPT
Code and models for ICML 2024 paper, NExT-GPT: Any-to-Any Multimodal Large Language Model