om-ai-lab/VLM-R1
Solve Visual Understanding with Reinforced VLMs
This project offers a way to train Vision-Language Models (VLMs) to better understand and identify specific objects in images from descriptive text. Given images paired with textual descriptions as training data, it produces a VLM that can precisely locate the described objects, even in new or unfamiliar visual contexts. Researchers and developers building advanced visual AI applications will find it useful for improving model accuracy and generalization.
Use this if you need to train Vision-Language Models for tasks like Referring Expression Comprehension (REC) or Open-Vocabulary Object Detection (OVD) and need strong performance and generalization, especially on data outside the original training distribution.
Not ideal if you want an off-the-shelf application that works without any model training, or if your primary need is general image classification rather than text-guided object localization.
Stars: 5,864
Forks: 377
Language: Python
License: Apache-2.0
Category: Transformers
Last pushed: Mar 12, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/om-ai-lab/VLM-R1"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
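For scripted access, here is a minimal Python sketch of the same request, assuming the endpoint returns a JSON body as the curl example implies (the response schema is not documented here):

import requests

# Fetch the quality record for om-ai-lab/VLM-R1 (same URL as the curl example).
url = "https://pt-edge.onrender.com/api/v1/quality/transformers/om-ai-lab/VLM-R1"
response = requests.get(url, timeout=10)
response.raise_for_status()  # surface HTTP errors, e.g. hitting the 100 requests/day limit
data = response.json()       # assumes a JSON body; inspect it to learn the actual fields
print(data)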
Related models
KimMeen/Time-LLM
[ICLR 2024] Official implementation of "🦙 Time-LLM: Time Series Forecasting by Reprogramming...
bytedance/SALMONN
SALMONN family: A suite of advanced multi-modal LLMs
NVlabs/OmniVinci
OmniVinci is an omni-modal LLM for joint understanding of vision, audio, and language.
fixie-ai/ultravox
A fast multimodal LLM for real-time voice
bytedance/video-SALMONN-2
video-SALMONN 2 is a powerful audio-visual large language model (LLM) that generates...