dame-cell/VisionRAG
A multimodal (vision) retrieval-augmented generation (RAG) architecture
VisionRAG helps you find answers within various document types, like reports, articles, or presentations, by directly analyzing their visual content. Instead of converting documents to text first, it takes screenshots of document pages as input and can retrieve relevant sections or generate answers, even from complex layouts with images or charts. This makes it ideal for researchers, analysts, or anyone who frequently extracts information from documents that mix text and visual content.
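To illustrate the screenshot-first retrieval pattern described above, here is a minimal Python sketch. The embed_page_image, embed_query, and answer_from_pages helpers are hypothetical placeholders standing in for a vision embedding model and a vision-language model; they are not VisionRAG's actual API.

from pathlib import Path

import numpy as np


def embed_page_image(image: Path) -> np.ndarray:
    # Hypothetical: encode a page screenshot with a vision embedding model.
    raise NotImplementedError


def embed_query(text: str) -> np.ndarray:
    # Hypothetical: encode the question into the same embedding space.
    raise NotImplementedError


def answer_from_pages(question: str, pages: list[Path]) -> str:
    # Hypothetical: ask a vision-language model to answer from page images.
    raise NotImplementedError


def retrieve_and_answer(question: str, page_images: list[Path], top_k: int = 3) -> str:
    # Score each page screenshot against the query by cosine similarity.
    q = embed_query(question)
    scored = []
    for page in page_images:
        v = embed_page_image(page)
        sim = float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
        scored.append((sim, page))
    # Answer directly from the top-scoring page images, so charts and
    # tables never pass through lossy text extraction.
    top = [p for _, p in sorted(scored, key=lambda s: s[0], reverse=True)[:top_k]]
    return answer_from_pages(question, top)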
No commits in the last 6 months.
Use this if you need to quickly and accurately find information within documents that contain a mix of text, images, charts, and tables, without relying on error-prone text extraction.
Not ideal if your documents are exclusively plain text or if you only need to process small, simple files where traditional text-based search is sufficient.
Stars: 40
Forks: 2
Language: Jupyter Notebook
License: MIT
Category:
Last pushed: Oct 01, 2024
Commits (last 30 days): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/dame-cell/VisionRAG"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.
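The same endpoint can also be called from Python. A minimal sketch of the unauthenticated tier, assuming the endpoint returns a JSON body (the response schema is not documented on this page):

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/rag/dame-cell/VisionRAG"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())  # assumes a JSON body; inspect it to see the available fields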
Higher-rated alternatives
illuin-tech/colpali
The code used to train and run inference with the ColVision models, e.g. ColPali, ColQwen2, and ColSmol.
AnswerDotAI/byaldi
Use late-interaction multi-modal models such as ColPali in just a few lines of code.
jolibrain/colette
Multimodal RAG to search and interact locally with technical documents of any kind
nannib/nbmultirag
A framework in Italian and English that lets you chat with your own documents via RAG, ...
OpenBMB/VisRAG
Parsing-free RAG supported by VLMs