mbzuai-oryx/LLaVA-pp

🔥🔥 LLaVA++: Extending LLaVA with Phi-3 and LLaMA-3 (LLaVA LLaMA-3, LLaVA Phi-3)

Score: 36 / 100 (Emerging)

This project helps researchers and developers enhance the visual-understanding capabilities of large language models. By swapping in newer base language models such as Phi-3 and LLaMA-3, it enables more accurate interpretation of images alongside text. The input is an existing LLaVA-1.5 model plus the relevant training data; the output is a stronger multimodal model. It is aimed at AI researchers and developers working on multimodal AI.

848 stars. No commits in the last 6 months.

Use this if you are an AI researcher or developer looking to upgrade the visual understanding capabilities of your LLaVA models with the latest language models like Phi-3 and LLaMA-3.

Not ideal if you are an end-user without a technical background in AI model development, as this tool requires familiarity with model training and deployment.

multimodal-ai large-language-models computer-vision machine-learning-research model-fine-tuning
No License · Stale (6m) · No Package · No Dependents
Maintenance: 2 / 25
Adoption: 10 / 25
Maturity: 8 / 25
Community: 16 / 25


Stars: 848
Forks: 61
Language: Python
License: none
Last pushed: Aug 05, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/mbzuai-oryx/LLaVA-pp"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
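The same endpoint can also be called from code. Below is a minimal Python sketch using only the standard library; the response schema is not documented here, so the JSON is simply parsed and returned as-is (the helper names `quality_url` and `fetch_quality` are illustrative, not part of the API):

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-report URL for a GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality report as parsed JSON (100 requests/day without a key)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

# Example (requires network):
# report = fetch_quality("mbzuai-oryx", "LLaVA-pp")
# print(json.dumps(report, indent=2))
```

With a free API key the limit rises to 1,000 requests/day; how the key is passed (header or query parameter) is not specified on this page, so check the API's own documentation before adding it to the request.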