xiaoachen98/Open-LLaVA-NeXT

An open-source implementation for training LLaVA-NeXT.

Score: 31 / 100 (Emerging)

This project provides an open-source framework for building and training multi-modal AI models that understand both images and text. It takes raw image and text data, processes them, and outputs a trained model capable of visual reasoning and question answering. Researchers and practitioners working on next-generation vision-language systems will find it useful for developing new capabilities.

436 stars. No commits in the last 6 months.

Use this if you are an AI researcher or practitioner looking to train custom large multi-modal models (LMMs) that can interpret and respond to visual and textual information.

Not ideal if you are looking for a pre-built, ready-to-use AI application or a simple API to integrate into an existing product without deep AI model training knowledge.

Tags: AI-research, multi-modal-AI, computer-vision, natural-language-processing, machine-learning-engineering
Badges: No License · Stale (6 months) · No Package · No Dependents
Score breakdown:
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 8 / 25
Community: 13 / 25
The four sub-scores sum to the overall 31 / 100.


Stars: 436
Forks: 23
Language: Python
License: None
Last pushed: Oct 23, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/xiaoachen98/Open-LLaVA-NeXT"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
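The same data can also be fetched programmatically. Below is a minimal Python sketch using only the standard library; it assumes the endpoint returns a JSON body, since the response schema is not documented here.

# Minimal sketch: fetch the quality data for this repo and pretty-print it.
# Assumes the endpoint returns JSON; no field names are assumed.
import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/xiaoachen98/Open-LLaVA-NeXT"

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)  # parse the JSON response body

print(json.dumps(data, indent=2))  # print whatever fields the API returns

Without an API key this counts against the 100 requests/day anonymous quota described above.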