ChaoLinAViy/OMGM
OMGM: Orchestrate Multiple Granularities and Modalities for Efficient Multimodal Retrieval (ACL 2025 Main Conference)
This project targets researchers and data scientists building question-answering systems that combine text and images. Given a visual question and a rich knowledge base (text summaries plus images), it retrieves the encyclopedic entities most relevant to answering the question. It is particularly useful when information is spread across modalities and precise retrieval matters.
Use this if you need to accurately retrieve detailed information by connecting visual questions with textual and image-based knowledge from a large, encyclopedic database.
Not ideal if your task involves only text-based information retrieval or if your knowledge base lacks visual components.
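As a high-level illustration of this kind of multimodal entity retrieval, here is a minimal sketch. This is not the OMGM method itself: it simply embeds the question image and question text with an off-the-shelf CLIP checkpoint and scores them against entity summaries and entity images. The knowledge-base layout, file names, and equal-weight score fusion are illustrative assumptions.

```python
# Minimal sketch of multimodal entity retrieval -- NOT the OMGM implementation.
# Uses the public sentence-transformers CLIP checkpoint; the knowledge base,
# file paths, and score fusion below are illustrative assumptions.
from PIL import Image
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("clip-ViT-B-32")  # joint text/image embedding space

# Hypothetical knowledge base: each entity has a text summary and an image.
kb = [
    {"entity": "Eiffel Tower",
     "summary": "Wrought-iron lattice tower on the Champ de Mars in Paris.",
     "image": "kb/eiffel.jpg"},
    {"entity": "Tokyo Tower",
     "summary": "Lattice observation tower in Minato, Tokyo.",
     "image": "kb/tokyo_tower.jpg"},
]

# Encode the visual question: an image plus its accompanying question text.
q_img = model.encode(Image.open("question.jpg"), convert_to_tensor=True)
q_txt = model.encode("Which city is this landmark in?", convert_to_tensor=True)

# Encode both granularities of the knowledge base: summaries and images.
sum_emb = model.encode([e["summary"] for e in kb], convert_to_tensor=True)
img_emb = model.encode([Image.open(e["image"]) for e in kb], convert_to_tensor=True)

# Fuse similarities across modalities; equal weighting is a placeholder,
# a real system would tune or learn these weights.
scores = (
    util.cos_sim(q_txt, sum_emb)    # question text  vs. entity summaries
    + util.cos_sim(q_img, img_emb)  # question image vs. entity images
    + util.cos_sim(q_img, sum_emb)  # cross-modal: question image vs. summaries
)
best = kb[int(scores.argmax())]["entity"]
print("Top entity:", best)
```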
Stars
13
Forks
2
Language
Python
License
—
Category
—
Last pushed
Dec 30, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/ChaoLinAViy/OMGM"
Open to everyone: 100 requests/day with no API key. A free key raises the limit to 1,000 requests/day.
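For scripted access, the same endpoint can be queried from Python. The URL comes from the curl command above; the response schema is not documented here, so the keys in the returned JSON are an assumption to inspect rather than rely on.

```python
# Fetch the repo's quality data from the endpoint shown above.
# The response field names are undocumented here -- print and inspect first.
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/rag/ChaoLinAViy/OMGM"
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # surface 4xx/5xx errors (e.g. rate limiting)
data = resp.json()
print(data)
```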
Higher-rated alternatives
illuin-tech/colpali
The code used to train and run inference with the ColVision models, e.g. ColPali, ColQwen2, and ColSmol.
AnswerDotAI/byaldi
Use late-interaction multi-modal models such as ColPali in just a few lines of code.
jolibrain/colette
Multimodal RAG to search and interact locally with technical documents of any kind
nannib/nbmultirag
A framework in Italian and English that lets you chat with your own documents via RAG,...
OpenBMB/VisRAG
Parsing-free RAG supported by VLMs