LLaVA and LLaVA-Mini

LLaVA-Mini is a parameter-efficient variant derived from the original LLaVA architecture, designed to deliver similar multimodal capabilities with reduced computational requirements. The two are ecosystem siblings, with LLaVA-Mini serving as a lightweight alternative to LLaVA.

| Metric | LLaVA | LLaVA-Mini |
|---|---|---|
| Overall score | 47 (Emerging) | 41 (Emerging) |
| Maintenance | 0/25 | 2/25 |
| Adoption | 10/25 | 10/25 |
| Maturity | 16/25 | 16/25 |
| Community | 21/25 | 13/25 |
| Stars | 24,554 | 562 |
| Forks | 2,745 | 30 |
| Downloads | | |
| Commits (30d) | 0 | 0 |
| Language | Python | Python |
| License | Apache-2.0 | Apache-2.0 |
| Flags | Stale 6m, No Package, No Dependents | Stale 6m, No Package, No Dependents |

About LLaVA

haotian-liu/LLaVA

[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.

LLaVA helps you understand and interact with images using natural language. You provide an image and ask questions or give instructions about its content, and the model generates descriptive text, answers, or other instruction-following responses. This is useful for anyone needing to extract insights from visuals, such as researchers analyzing images, content creators generating descriptions, or operations teams monitoring visual data.

image-analysis visual-intelligence content-description multimodal-interaction visual-question-answering

About LLaVA-Mini

ictnlp/LLaVA-Mini

LLaVA-Mini is a unified large multimodal model (LMM) that can support the understanding of images, high-resolution images, and videos in an efficient manner.

This project offers a unified large multimodal model that efficiently processes and understands both images and videos. It takes visual inputs (still images or video clips) and produces detailed descriptions or answers to questions about the content. Researchers and developers using large language models to analyze visual data will find it useful for efficiency-sensitive applications.

multimodal-AI computer-vision video-analysis image-understanding AI-development

Scores updated daily from GitHub, PyPI, and npm data. How scores work