EvolvingLMMs-Lab/LLaVA-OneVision-1.5

Fully Open Framework for Democratized Multimodal Training

Score: 47 / 100 (Emerging)

This framework helps AI developers and researchers build and train advanced Large Multimodal Models (LMMs) that understand both images and text. You feed it diverse image-text datasets, and it produces high-performing LMMs that can accurately interpret visual information at its original resolution. It is aimed at AI practitioners building cutting-edge multimodal applications.

Use this if you need to train your own state-of-the-art LMMs with superior visual understanding and want a cost-efficient, fully open-source framework.

Not ideal if you are looking to use a pre-trained LMM off-the-shelf without custom training or if you lack the technical expertise for model development.

Tags: AI model training, multimodal AI, computer vision, natural language processing, machine learning, research
No package · No dependents
Maintenance: 6 / 25
Adoption: 10 / 25
Maturity: 15 / 25
Community: 16 / 25

How are scores calculated?
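
The overall score matches the sum of the four subscores above (6 + 10 + 15 + 16 = 47), each capped at 25. A minimal Python sketch of that apparently additive model follows; the additive rule is an assumption inferred from the numbers, not documented behavior:

SUBSCORES = {
    "maintenance": 6,
    "adoption": 10,
    "maturity": 15,
    "community": 16,
}

def overall_score(subscores):
    # Assumption: the overall score is the plain sum of four 0-25 subscores.
    return sum(max(0, min(25, s)) for s in subscores.values())

print(overall_score(SUBSCORES))  # 47, matching the listed 47 / 100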

Stars: 762
Forks: 61
Language: Python
License: Apache-2.0
Last pushed: Dec 27, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/EvolvingLMMs-Lab/LLaVA-OneVision-1.5"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
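
For programmatic use, here is a minimal Python sketch of the same request. It pretty-prints the raw JSON rather than accessing named fields, since the response schema is not documented here and any field names would be assumptions:

import json
import urllib.request

URL = ("https://pt-edge.onrender.com/api/v1/quality/llm-tools/"
       "EvolvingLMMs-Lab/LLaVA-OneVision-1.5")

# Fetch the quality record; no API key is needed for up to 100 requests/day.
with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

# The schema is undocumented here, so just pretty-print the full payload.
print(json.dumps(data, indent=2))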