LLaVA-VL/LLaVA-Plus-Codebase
LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills
This project helps researchers and developers build AI agents that understand images and call external tools to answer complex questions or perform tasks. Given an image and a natural-language question, it produces a detailed answer, optionally by coordinating specialized vision tools (for example, detection, segmentation, or image generation). It is designed for AI researchers and machine learning engineers working on multimodal AI.
763 stars. No commits in the last 6 months.
Use this if you are developing AI models that need to analyze visual information and leverage external tools to provide sophisticated, context-aware responses, rather than just basic image descriptions.
Not ideal if you need a pre-packaged solution for end-users, or if your primary goal is simple image captioning without tool integration.
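To make the "plug and use skills" flow in the description above concrete, here is a minimal, self-contained Python sketch. Every name in it (the detect_objects stand-in, the TOOLS registry, the keyword-based routing) is a hypothetical illustration of the flow, not the actual LLaVA-Plus API; consult the repo's serving scripts for real usage.

# Conceptual sketch only: question -> optional tool call -> grounded answer.
from typing import Callable

# A "skill" maps (question, image path) to textual evidence.
Tool = Callable[[str, str], str]

def detect_objects(question: str, image_path: str) -> str:
    # Stand-in for a real detector; LLaVA-Plus integrates tools of this kind.
    return f"detector saw: 2 dogs, 1 frisbee in {image_path}"

TOOLS: dict[str, Tool] = {"detection": detect_objects}

def answer(question: str, image_path: str) -> str:
    # Step 1: decide whether a skill is needed. In a real assistant the
    # model makes this choice; a keyword check stands in for it here.
    if "how many" in question.lower():
        evidence = TOOLS["detection"](question, image_path)
        # Step 2: fold the tool output back into the final answer.
        return f"Based on the tool result ({evidence}), there are 2 dogs."
    return "A direct answer from the multimodal model; no tool needed."

print(answer("How many dogs are in this photo?", "park.jpg"))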
Stars: 763
Forks: 58
Language: Python
License: Apache-2.0
Category: LLM Tools
Last pushed: Feb 01, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/LLaVA-VL/LLaVA-Plus-Codebase"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
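If you prefer Python to curl, a minimal sketch using the requests library is below. The endpoint URL is the one shown above; the "X-API-Key" header name and the JSON response shape are assumptions, not documented behavior.

import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/LLaVA-VL/LLaVA-Plus-Codebase"

def fetch_quality(api_key: str | None = None) -> dict:
    # No key: 100 requests/day. A free key raises the limit to 1,000/day.
    headers = {"X-API-Key": api_key} if api_key else {}  # header name assumed
    resp = requests.get(URL, headers=headers, timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(fetch_quality())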
Higher-rated alternatives
jingyaogong/minimind-v
🚀 Train a 26M-parameter visual multimodal VLM from scratch in just 1 hour! 🌏
roboflow/vision-ai-checkup
Take your LLM to the optometrist.
SkyworkAI/Skywork-R1V
Skywork-R1V is an advanced multimodal AI model series developed by Skywork AI, specializing in...
zai-org/GLM-TTS
GLM-TTS: Controllable & Emotion-Expressive Zero-shot TTS with Multi-Reward Reinforcement Learning
NExT-GPT/NExT-GPT
Code and models for ICML 2024 paper, NExT-GPT: Any-to-Any Multimodal Large Language Model