ChaoLinAViy/OMGM

OMGM: Orchestrate Multiple Granularities and Modalities for Efficient Multimodal Retrieval (ACL 2025 Main Conference)

Overall score: 29 / 100 (Experimental)

This project is aimed at researchers and data scientists building question-answering systems that combine text and images. Given a visual question and a knowledge base of text summaries and images, it retrieves the encyclopedic entities most relevant to answering that question. It is especially useful when knowledge is spread across modalities and precise retrieval from a large corpus is required.

Use this if you need to accurately retrieve detailed information by connecting visual questions with textual and image-based knowledge from a large, encyclopedic database.

Not ideal if your task involves only text-based information retrieval or if your knowledge base lacks visual components.
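The sketch below illustrates the general multimodal-retrieval pattern such a system builds on: embed the visual query and the knowledge-base entries in a shared space, then rank entities by similarity. This is not OMGM's actual API; the CLIP checkpoint, the knowledge-base fields, and the file name question.jpg are illustrative assumptions, and the paper's multi-granularity, multi-modality orchestration goes well beyond this single-stage ranking.

```python
# Minimal sketch of shared-space multimodal retrieval.
# Illustrative only: model, KB fields, and file names are assumptions,
# not OMGM's interface.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed_text(texts):
    # Encode text summaries into normalized embeddings.
    inputs = processor(text=texts, return_tensors="pt",
                       padding=True, truncation=True)
    with torch.no_grad():
        feats = model.get_text_features(**inputs)
    return torch.nn.functional.normalize(feats, dim=-1)

def embed_image(image):
    # Encode the query image into the same embedding space.
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    return torch.nn.functional.normalize(feats, dim=-1)

# Hypothetical knowledge base: each entity has a text summary.
kb = [
    {"entity": "Eiffel Tower",
     "summary": "Wrought-iron lattice tower in Paris, France."},
    {"entity": "Tokyo Tower",
     "summary": "Lattice communications tower in Minato, Tokyo, Japan."},
]
kb_embeds = embed_text([e["summary"] for e in kb])

# A visual question: the image half of an image+text query.
query_embed = embed_image(Image.open("question.jpg"))

# Rank KB entities by cosine similarity to the query image.
scores = (query_embed @ kb_embeds.T).squeeze(0)
best = kb[int(scores.argmax())]
print(best["entity"], float(scores.max()))
```

A real pipeline of this kind would also embed the question text and the entities' images, then fuse scores across granularities and modalities; that orchestration is the contribution the paper's title refers to.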

question-answering multimodal-retrieval knowledge-base-query visual-information-retrieval encyclopedic-search
No license · No package · No dependents

Maintenance: 6 / 25
Adoption: 5 / 25
Maturity: 7 / 25
Community: 11 / 25
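The four components, each out of 25, appear to sum to the overall score: 6 + 5 + 7 + 11 = 29 out of 100.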

Stars: 13
Forks: 2
Language: Python
License: none
Last pushed: Dec 30, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/ChaoLinAViy/OMGM"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
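For scripted access, a Python equivalent of the curl call above might look like the following minimal sketch. The endpoint is as documented; the response is assumed to be JSON, and its field names are not specified here, so the payload is printed as-is.

```python
# Fetch the same quality data the curl command returns.
# Assumes the documented keyless limit of 100 requests/day.
import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/rag/ChaoLinAViy/OMGM"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()  # surface HTTP errors (e.g., rate limiting)
print(resp.json())       # schema undocumented here, so print raw payload
```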