om-ai-lab/ZoomEye
[EMNLP-2025 Oral] ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration
This project helps anyone working with images that contain many small, detailed elements. Given an image and a question about its content, it uses AI to 'zoom in' on different parts of the image, much as a human would, to find the answer. The result is a more accurate answer, especially for images where fine details matter. It is aimed at AI researchers and practitioners who build or use advanced vision-language models.
Use this if your current multimodal AI models struggle to accurately answer questions about images containing dense information or very fine-grained details.
Not ideal if you primarily work with simple images where the relevant information is visible without zooming in, or if you are not building or evaluating advanced AI models.
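Below is a minimal sketch of the zooming idea, not ZoomEye's actual tree-search algorithm: the image is split into quadrants and the search greedily descends into whichever crop a model rates most relevant to the question. vlm_relevance and vlm_answer are hypothetical stubs standing in for real model calls.

from PIL import Image

def vlm_relevance(crop, question):
    # Placeholder: in practice, ask a VLM to score how relevant
    # this crop is to the question.
    return 0.0

def vlm_answer(crop, question):
    # Placeholder: in practice, ask the VLM to answer from this crop.
    return "<answer>"

def quadrants(img):
    # Yield the four quadrant crops of an image.
    w, h = img.size
    for box in [(0, 0, w // 2, h // 2), (w // 2, 0, w, h // 2),
                (0, h // 2, w // 2, h), (w // 2, h // 2, w, h)]:
        yield img.crop(box)

def zoom_search(img, question, depth=3, min_size=64):
    # Greedily descend toward the most question-relevant crop, then answer.
    if depth == 0 or min(img.size) < min_size:
        return vlm_answer(img, question)
    crops = list(quadrants(img))
    scores = [vlm_relevance(c, question) for c in crops]
    best = crops[scores.index(max(scores))]
    return zoom_search(best, question, depth - 1, min_size)

# Example: zoom_search(Image.open("receipt.png"), "What is the total?")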
Stars
77
Forks
8
Language
Python
License
—
Category
—
Last pushed
Nov 20, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/om-ai-lab/ZoomEye"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
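For programmatic access, here is a minimal Python sketch of the same request; the response schema is not documented here, so this just pretty-prints whatever JSON comes back, and the auth-header name for keyed access is an assumption.

import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/om-ai-lab/ZoomEye"

req = urllib.request.Request(URL)
# If you have a key, attach it here; the header name is an assumption:
# req.add_header("Authorization", "Bearer YOUR_KEY")
with urllib.request.urlopen(req) as resp:
    data = json.load(resp)
print(json.dumps(data, indent=2))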
Higher-rated alternatives
jingyaogong/minimind-v
🚀 Train a 26M-parameter visual multimodal VLM from scratch in just 1 hour! 🌏
SkyworkAI/Skywork-R1V
Skywork-R1V is an advanced multimodal AI model series developed by Skywork AI, specializing in...
roboflow/vision-ai-checkup
Take your LLM to the optometrist.
zai-org/GLM-TTS
GLM-TTS: Controllable & Emotion-Expressive Zero-shot TTS with Multi-Reward Reinforcement Learning
NExT-GPT/NExT-GPT
Code and models for ICML 2024 paper, NExT-GPT: Any-to-Any Multimodal Large Language Model