LLaVA-VL/LLaVA-Plus-Codebase

LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills

Quality score: 42 / 100 (Emerging)

This project helps researchers and developers create advanced AI agents that can understand images and use external tools to answer complex questions or perform tasks. It takes an image and a natural language question as input, and outputs a detailed answer, potentially generated by coordinating various specialized AI tools. It is designed for AI researchers and machine learning engineers working on multimodal AI.
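To make that input/output contract concrete, here is a minimal, purely illustrative Python sketch of a tool-augmented question-answering loop. The names (ask, TOOLS, detect_objects) are hypothetical placeholders, not this repository's actual API; in LLaVA-Plus the multimodal model itself decides when to invoke a skill, whereas the stub below uses a keyword heuristic.

    from typing import Callable

    # Hypothetical registry of "skills"; real systems wrap models such as
    # detectors and segmenters behind a similar call interface.
    TOOLS: dict[str, Callable[[str], str]] = {
        "detect_objects": lambda image_path: "dog (0.95), dog (0.91)",  # stub
    }

    def ask(image_path: str, question: str) -> str:
        """Answer a visual question, optionally invoking a registered skill."""
        # 1. Decide whether a tool is needed (stubbed with a keyword check;
        #    LLaVA-Plus delegates this decision to the multimodal LLM).
        tool_name = "detect_objects" if "how many" in question.lower() else None
        # 2. Run the tool and collect its output as extra evidence.
        evidence = TOOLS[tool_name](image_path) if tool_name else "none"
        # 3. Compose the final answer from image, question, and tool evidence.
        return (f"question={question!r} image={image_path!r} "
                f"tool={tool_name} evidence={evidence}")

    print(ask("photo.jpg", "How many dogs are in the picture?"))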

763 stars. No commits in the last 6 months.

Use this if you are developing AI models that need to analyze visual information and leverage external tools to provide sophisticated, context-aware responses, rather than just basic image descriptions.

Not ideal if you need a pre-packaged solution for end-users, or if your primary goal is simple image captioning without tool integration.

Topics: multimodal-ai, ai-agent-development, visual-question-answering, tool-augmented-ai, machine-learning-research

Status flags: Stale (6 months without commits), No package published, No known dependents
Score breakdown (the four subscores sum to the overall 42/100):

Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 16 / 25


Stars: 763
Forks: 58
Language: Python
License: Apache-2.0
Last pushed: Feb 01, 2024
Commits (last 30 days): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/LLaVA-VL/LLaVA-Plus-Codebase"

The API is open to everyone at 100 requests/day with no key required; a free key raises the limit to 1,000 requests/day.
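For programmatic access, a short Python sketch using the requests library is shown below. The JSON field names (score, stars) are assumptions inferred from the card above, not a documented schema; inspect the response to confirm what the endpoint actually returns.

    import requests

    url = ("https://pt-edge.onrender.com/api/v1/quality/"
           "llm-tools/LLaVA-VL/LLaVA-Plus-Codebase")
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    data = resp.json()
    # Field names below are assumed, not documented; print(data) to see
    # the real schema before relying on any key.
    print(data.get("score"), data.get("stars"))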