OpenBMB/VisRAG
Parsing-free RAG supported by VLMs
VisRAG builds a retrieval-augmented generation (RAG) pipeline directly on page images with vision-language models, so it can answer questions about visual documents such as scanned PDFs without losing layout, figures, or other visual detail to text extraction. Given a question and a set of images, it retrieves the relevant pages and generates an answer grounded in that visual evidence. It suits researchers, analysts, and operations teams who work with visual data and need reliable information retrieval.
Use this if you need to find specific information or answer complex questions by examining multiple images or document scans where text extraction might miss visual cues.
Not ideal if your data consists purely of text documents or if you primarily need to summarize very long textual content.
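The flow behind this is retrieve-then-generate over page images: embed the query and each page image with a vision retriever, pick the top-scoring pages, and hand them to a VLM along with the question. The following is a minimal Python sketch of that pattern, assuming a hypothetical folder of page scans and a hypothetical question; it uses openai/clip-vit-base-patch32 as a stand-in retriever and leaves generation as a placeholder, so it illustrates the idea rather than VisRAG's own API.

# VisRAG-style retrieve-then-generate sketch.
# CLIP is a stand-in retriever here (NOT VisRAG's own VLM-based retriever);
# the generation step is left to whichever vision-language model you use.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed_images(paths):
    # Encode each page scan into a normalized embedding.
    images = [Image.open(p).convert("RGB") for p in paths]
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    return torch.nn.functional.normalize(feats, dim=-1)

def embed_query(question):
    # Encode the question into the same embedding space.
    inputs = processor(text=[question], return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        feats = model.get_text_features(**inputs)
    return torch.nn.functional.normalize(feats, dim=-1)

page_paths = ["page_001.png", "page_002.png", "page_003.png"]  # hypothetical scans
question = "What was the reported Q3 revenue?"                 # hypothetical question

scores = embed_query(question) @ embed_images(page_paths).T    # cosine similarity
top_pages = [page_paths[i] for i in scores.squeeze(0).topk(k=2).indices]

# Generation: pass the question plus the top-k page images to a VLM of your
# choice; omitted here.
print("Pages to send to the generator:", top_pages)

VisRAG's own pipeline uses VLM-based models for both the retrieval and the generation step; the stand-in above only shows where each piece fits.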
Stars: 932
Forks: 71
Language: Python
License: Apache-2.0
Category:
Last pushed: Dec 07, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/OpenBMB/VisRAG"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
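If you prefer to call the endpoint from Python instead of curl, the request below fetches the same data. The response schema is not documented on this page, so this sketch simply prints whatever JSON comes back rather than assuming particular fields.

# Fetch this entry's data from the public API (100 requests/day without a key).
import json
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/rag/OpenBMB/VisRAG"
resp = requests.get(url, timeout=30)
resp.raise_for_status()
print(json.dumps(resp.json(), indent=2))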
Compare
Higher-rated alternatives
illuin-tech/colpali
The code used to train and run inference with the ColVision models, e.g. ColPali, ColQwen2, and ColSmol.
AnswerDotAI/byaldi
Use late-interaction multi-modal models such as ColPali in just a few lines of code.
jolibrain/colette
Multimodal RAG to search and interact locally with technical documents of any kind
nannib/nbmultirag
A framework in Italian and English that lets you chat with your own documents via RAG,...
chiang-yuan/llamp
[EMNLP '25] A web app and Python API for a multi-modal RAG framework to ground LLMs on...