kyegomez/Qwen-VL
My personal implementation of the model from "Qwen-VL: A Frontier Large Vision-Language Model with Versatile Abilities"; the authors have not released the model code yet.
This project offers a foundational building block for AI developers working with multimodal data. It takes in both image and text data, processing them through a vision-language model to produce integrated outputs. The primary users are AI researchers and machine learning engineers looking to implement or experiment with advanced vision-language capabilities in their applications.
No commits in the last 6 months.
Use this if you are an AI developer or researcher needing to integrate image understanding with text processing within a single model architecture.
Not ideal if you are an end-user seeking a ready-to-use application, as this is a developer-focused implementation requiring coding expertise.
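For orientation, a minimal sketch of how such an implementation might be driven. The qwen module path, the QwenVL class, and its forward signature are illustrative assumptions and are not confirmed against the repository:

import torch
from qwen import QwenVL  # assumed import path, not verified against the repo

# Assumed constructor with default hyperparameters.
model = QwenVL()

# One RGB image and one tokenized text prompt (shapes are illustrative).
image = torch.randn(1, 3, 224, 224)
text = torch.randint(0, 20_000, (1, 128))

# The model fuses vision and language features and returns integrated outputs,
# e.g. logits over the text vocabulary.
output = model(image, text)
print(output.shape)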
Stars: 12
Forks: 2
Language: Python
License: MIT
Category: Transformers
Last pushed: Jan 29, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/kyegomez/Qwen-VL"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
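The same endpoint can also be queried from Python; a minimal sketch using the requests library (the response schema is not documented here, so the body is simply printed):

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/kyegomez/Qwen-VL"
resp = requests.get(url, timeout=30)
resp.raise_for_status()
print(resp.json())  # assumes the endpoint returns JSON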
Higher-rated alternatives
TinyLLaVA/TinyLLaVA_Factory
A Framework of Small-scale Large Multimodal Models
zjunlp/EasyInstruct
[ACL 2024] An Easy-to-use Instruction Processing Framework for LLMs.
rese1f/MovieChat
[CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding
haotian-liu/LLaVA
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
NVlabs/Eagle
Eagle: Frontier Vision-Language Models with Data-Centric Strategies