VisRAG and VARAG

VisRAG and VARAG are competing projects that take alternative approaches to vision-language-model-based RAG: VisRAG emphasizes parsing-free document processing, letting a VLM read pages directly, while VARAG prioritizes vision-first retrieval, processing page images before text. They represent two design philosophies for the same problem space.

Metric          VisRAG          VARAG
Overall score   49 (Emerging)   37 (Emerging)
Maintenance     6/25            2/25
Adoption        10/25           10/25
Maturity        16/25           8/25
Community       17/25           17/25
Stars           932             497
Forks           71              48
Commits (30d)   0               0
Language        Python          Python
License         Apache-2.0      none declared
Package         not published   not published
Dependents      none            none
Activity flag   none            stale for 6 months

About VisRAG

OpenBMB/VisRAG

Parsing-free RAG supported by VLMs

This project helps anyone needing to extract precise answers from a collection of images or visual documents, like PDFs, without losing crucial visual details. It takes your questions and a set of images, then provides accurate answers by directly understanding the visual evidence. This is ideal for researchers, analysts, or operations managers who work with visual data and need reliable information retrieval.

document-intelligence visual-analysis information-extraction research-assist data-mining
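VisRAG's parsing-free idea can be illustrated with a toy retrieve-then-generate loop. The sketch below is illustrative only and does not use VisRAG's actual API: a real pipeline would embed raw page images with a vision-language encoder, whereas the stub `embed` here works on short text descriptions standing in for pages, and `retrieve` and the page structure are hypothetical.

```python
import math

# Toy stand-in for a VLM embedder: in a VisRAG-style pipeline the encoder
# consumes page images directly (no OCR or layout parsing); here a
# bag-of-words vector over a short description keeps the sketch runnable.
def embed(text):
    words = text.lower().split()
    counts = {w: words.count(w) for w in set(words)}
    norm = math.sqrt(sum(c * c for c in counts.values())) or 1.0
    return {w: c / norm for w, c in counts.items()}

def cosine(a, b):
    return sum(v * b.get(w, 0.0) for w, v in a.items())

def retrieve(query, pages, k=1):
    """Rank candidate pages by similarity to the query; no parsing step."""
    q = embed(query)
    ranked = sorted(pages, key=lambda p: cosine(q, embed(p["content"])), reverse=True)
    return ranked[:k]

pages = [
    {"id": "p1", "content": "quarterly revenue bar chart"},
    {"id": "p2", "content": "invoice total and tax breakdown"},
    {"id": "p3", "content": "employee organization structure"},
]
top = retrieve("invoice tax total", pages)
# The retrieved page image(s) would then be handed to a VLM together with
# the question, so visual details survive all the way to answer generation.
```

In the real system the embedding and answering stages are both VLM calls; the point of the sketch is the control flow: embed, rank, pass the winning pages straight to the model.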

About VARAG

adithya-s-k/VARAG

Vision-Augmented Retrieval and Generation (VARAG) - Vision first RAG Engine

This tool helps people who work with documents containing both text and images to quickly find precise information. You input documents like scanned PDFs, research papers, or infographics, and it helps you retrieve relevant text, figures, or entire pages based on your questions. It's designed for professionals who need to extract insights from complex, visually rich documents.

document-analysis information-retrieval research-analytics content-extraction visual-document-understanding
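VARAG's vision-first emphasis can be read as a ranking policy that weights image-level evidence ahead of text matches. The fusion sketch below is an assumption-laden illustration, not VARAG's actual implementation: the field names, the `alpha` weight, and `rank_pages` are all hypothetical.

```python
def fused_score(text_sim, image_sim, alpha=0.7):
    # Vision-first: image similarity dominates the fused score
    # (alpha is an assumed weight, not taken from VARAG).
    return alpha * image_sim + (1 - alpha) * text_sim

def rank_pages(pages, k=2):
    """Order candidate pages by fused visual + textual similarity."""
    return sorted(
        pages,
        key=lambda p: fused_score(p["text_sim"], p["image_sim"]),
        reverse=True,
    )[:k]

candidates = [
    {"id": "fig-3", "text_sim": 0.20, "image_sim": 0.90},
    {"id": "page-7", "text_sim": 0.85, "image_sim": 0.10},
    {"id": "table-2", "text_sim": 0.50, "image_sim": 0.40},
]
best = rank_pages(candidates, k=1)[0]
```

With the image term weighted at 0.7, the visually strong figure outranks the page with the best pure-text match, which is the behavior a vision-first engine is after when documents carry figures and infographics.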

Scores updated daily from GitHub, PyPI, and npm data.