xiaoachen98/Open-LLaVA-NeXT
An open-source implementation for training LLaVA-NeXT.
This project provides an open-source framework for building and training LLaVA-NeXT-style multimodal models that understand both images and text. It takes raw image and text data, processes it, and produces a trained model capable of visual reasoning and question answering. Researchers and practitioners working on next-generation vision-language systems will find it useful for developing new capabilities.
436 stars. No commits in the last 6 months.
Use this if you are an AI researcher or practitioner looking to train custom large multimodal models (LMMs) that can interpret and respond to visual and textual information.
Not ideal if you are looking for a pre-built, ready-to-use AI application or a simple API to integrate into an existing product without deep AI model training knowledge.
Stars: 436
Forks: 23
Language: Python
License: —
Category: —
Last pushed: Oct 23, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/xiaoachen98/Open-LLaVA-NeXT"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
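For programmatic access, here is a minimal Python sketch of the same request, assuming the endpoint returns JSON (the response schema isn't documented here):

import requests

# Quality-data endpoint for this repository (same URL as the curl example above).
URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/xiaoachen98/Open-LLaVA-NeXT"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()  # surfaces rate-limit or server errors as exceptions
print(resp.json())       # assumption: the response body is a JSON document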
Higher-rated alternatives
TinyLLaVA/TinyLLaVA_Factory
A Framework of Small-scale Large Multimodal Models
zjunlp/EasyInstruct
[ACL 2024] An Easy-to-use Instruction Processing Framework for LLMs.
rese1f/MovieChat
[CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding
haotian-liu/LLaVA
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
NVlabs/Eagle
Eagle: Frontier Vision-Language Models with Data-Centric Strategies