dame-cell/VisionRAG

A novel multi-modal (vision) RAG architecture

Quality score: 29 / 100 (Experimental)

VisionRAG helps you find answers within various document types, like reports, articles, or presentations, by directly analyzing their visual content. Instead of converting images to text first, it takes screenshots of documents as input and can retrieve relevant sections or generate answers, even from complex layouts with images or charts. This is ideal for researchers, analysts, or anyone who frequently extracts information from a mix of text and visual documents.

No commits in the last 6 months.

Use this if you need to quickly and accurately find information within documents that contain a mix of text, images, charts, and tables, without relying on error-prone text extraction.

Not ideal if your documents are exclusively plain text or if you only need to process small, simple files where traditional text-based search is sufficient.

Tags: document-analysis, information-retrieval, research-assist, content-discovery, knowledge-management
Status: Stale (6m), No Package, No Dependents

Score breakdown:
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 6 / 25


Stars: 40
Forks: 2
Language: Jupyter Notebook
License: MIT
Last pushed: Oct 01, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/dame-cell/VisionRAG"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
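The same endpoint can be called from code. Below is a minimal Python sketch using only the standard library; the response schema is not documented on this page, so the sketch decodes the JSON body without assuming any particular fields:

```python
import json
import urllib.request

# Public endpoint shown above; no API key needed for up to 100 requests/day.
URL = "https://pt-edge.onrender.com/api/v1/quality/rag/dame-cell/VisionRAG"

def fetch_quality(url: str = URL) -> dict:
    """Fetch the quality record for the repo and decode it as JSON.

    The field names in the response are not documented here, so callers
    should inspect the returned dict rather than assume a schema.
    """
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

# Usage (performs a live HTTP request):
#   record = fetch_quality()
#   print(json.dumps(record, indent=2))
```

For authenticated use, consult the service's own documentation for how the key is passed; that detail is not stated on this page.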