Hon-Wong/VoRA
[Fully open] [Encoder-free MLLM] Vision as LoRA
VoRA lets AI developers turn a large language model (LLM) into a Multimodal Large Language Model (MLLM) that can understand and respond to both text and images. Rather than attaching an external vision encoder, it trains LoRA layers inside the LLM on visual data (images paired with captions or questions), producing a model that interprets images and generates relevant text responses. It is aimed at AI researchers and engineers who build and customize advanced models.
379 stars. No commits in the last 6 months.
Use this if you are an AI developer who wants to add robust visual understanding to a standard language model without relying on a complex external vision encoder.
Not ideal if you are an end-user seeking a ready-to-use application for image analysis or text-to-image generation, as this is a development tool for model builders.
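To make the encoder-free idea concrete, here is a minimal, illustrative sketch of the general "vision as LoRA" pattern: a frozen linear layer in the LLM gets a trainable low-rank update, and image patches are projected into the model's hidden space so they share one sequence with text tokens. The names LoRALinear and PatchEmbed and all dimensions are hypothetical choices for this sketch, not the repository's actual API.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    # A frozen pretrained linear layer plus a trainable low-rank update.
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # keep pretrained weights frozen
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)  # start as a no-op so outputs match the base model
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))

class PatchEmbed(nn.Module):
    # Hypothetical projection of flattened image patches into the LLM hidden space.
    def __init__(self, patch_dim: int, hidden_dim: int):
        super().__init__()
        self.proj = nn.Linear(patch_dim, hidden_dim)

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        return self.proj(patches)

# Toy usage: visual patch tokens and text tokens share one sequence; only the
# LoRA parameters and the patch projection would be trained.
hidden = 64
layer = LoRALinear(nn.Linear(hidden, hidden), rank=4)
embed = PatchEmbed(patch_dim=3 * 16 * 16, hidden_dim=hidden)
patches = torch.randn(1, 9, 3 * 16 * 16)  # nine flattened 16x16 RGB patches
text = torch.randn(1, 5, hidden)          # five text-token embeddings
out = layer(torch.cat([embed(patches), text], dim=1))  # shape (1, 14, 64)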
Stars: 379
Forks: 31
Language: Python
License: —
Category: —
Last pushed: Jun 12, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Hon-Wong/VoRA"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
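For programmatic access, here is a minimal Python sketch using only the standard library. It assumes the endpoint returns JSON; the response fields are not documented here, so the payload is printed as-is.

import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/Hon-Wong/VoRA"

# Fetch the quality record for this repo; unauthenticated access is limited
# to 100 requests/day.
with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))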
Higher-rated alternatives
KimMeen/Time-LLM
[ICLR 2024] Official implementation of "🦙 Time-LLM: Time Series Forecasting by Reprogramming...
om-ai-lab/VLM-R1
Solve Visual Understanding with Reinforced VLMs
bytedance/SALMONN
SALMONN family: A suite of advanced multi-modal LLMs
NVlabs/OmniVinci
OmniVinci is an omni-modal LLM for joint understanding of vision, audio, and language.
fixie-ai/ultravox
A fast multimodal LLM for real-time voice