M3-IT/YING-VLM
Vision Large Language Models trained on the M3IT instruction-tuning dataset
This repository helps researchers and AI developers work with Vision Large Language Models (VLMs). Given an image and a text instruction or question, the model generates a relevant text response grounded in the image's content. It's designed for those building applications that need to understand and describe visual information.
No commits in the last 6 months.
Use this if you are a researcher or developer building applications that require a model to answer questions or follow instructions about images.
Not ideal if you need a pre-built application or a low-code solution for image analysis, as this requires programming knowledge to integrate.
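To make the workflow concrete, here is a minimal Python sketch of the image-plus-instruction inference loop described above. It uses a public BLIP-2 checkpoint (Salesforce/blip2-opt-2.7b) purely as a stand-in, since YING-VLM's own checkpoints and loading code live in the repo and may expose a different interface; the image path and prompt are placeholders.

import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

device = "cuda" if torch.cuda.is_available() else "cpu"
# Stand-in checkpoint for illustration; NOT the YING-VLM weights.
model_id = "Salesforce/blip2-opt-2.7b"

processor = Blip2Processor.from_pretrained(model_id)
model = Blip2ForConditionalGeneration.from_pretrained(model_id).to(device)

image = Image.open("example.jpg")  # placeholder: any local image
prompt = "Question: What is happening in this image? Answer:"

# Pack the image and instruction together, generate, and decode the answer.
inputs = processor(images=image, text=prompt, return_tensors="pt").to(device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(output_ids[0], skip_special_tokens=True))

Once the repo is set up, swap in its own model-loading code; the overall pattern (processor, image, instruction, generate, decode) stays the same for most VLMs of this style.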
Stars
17
Forks
—
Language
Python
License
—
Category
transformers
Last pushed
Aug 16, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/M3-IT/YING-VLM"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
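For scripting, the same request can be issued from Python. A minimal sketch, assuming the endpoint returns JSON (the response schema is not documented here):

import requests

# Public endpoint from the curl example above; the free tier needs no key.
URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/M3-IT/YING-VLM"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()

# Assumption: the API returns JSON; adjust parsing if the content type differs.
print(resp.json())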
Higher-rated alternatives
KimMeen/Time-LLM
[ICLR 2024] Official implementation of "🦙 Time-LLM: Time Series Forecasting by Reprogramming...
om-ai-lab/VLM-R1
Solve Visual Understanding with Reinforced VLMs
bytedance/SALMONN
SALMONN family: A suite of advanced multi-modal LLMs
NVlabs/OmniVinci
OmniVinci is an omni-modal LLM for joint understanding of vision, audio, and language.
fixie-ai/ultravox
A fast multimodal LLM for real-time voice