joohyung00/lilac
This is the public repository for "LILaC: Late Interacting in Layered Component Graph for Open-domain Multimodal Multihop Retrieval", published at EMNLP 2025 (Main Conference).
LILaC helps you find precise answers to complex questions by searching across various types of documents, including text, tables, and images. It takes your question and a collection of multimodal documents, then identifies the most relevant information to provide an accurate answer. This tool is ideal for researchers, analysts, or anyone who needs to extract specific answers from large, diverse document sets.
Use this if you need to perform accurate, multi-step searches across documents that contain a mix of text, images, and tables to answer complex questions.
Not ideal if your primary goal is simple keyword search or if your documents are exclusively plain text.
Stars
17
Forks
3
Language
Python
License
—
Category
Last pushed
Nov 12, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/joohyung00/lilac"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
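The same endpoint can be called from Python. A minimal sketch using only the standard library, assuming the URL pattern `/{owner}/{repo}` shown in the curl example above; the response schema and the mechanism for passing an API key are not documented here, so the `fetch_quality` helper simply returns the parsed JSON as-is:

```python
import json
from urllib.request import urlopen

# Base endpoint taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"

def build_url(owner: str, repo: str) -> str:
    """Build the quality-data URL for a given repository."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """GET the quality record and return it as parsed JSON.

    The response schema is not documented on this page, so callers
    should inspect the returned dict rather than assume fields.
    """
    with urlopen(build_url(owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Fetch the record for this repository and pretty-print it.
    print(json.dumps(fetch_quality("joohyung00", "lilac"), indent=2))
```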
Higher-rated alternatives
illuin-tech/colpali
The code used to train and run inference with the ColVision models, e.g. ColPali, ColQwen2, and ColSmol.
AnswerDotAI/byaldi
Use late-interaction multi-modal models such as ColPali in just a few lines of code.
jolibrain/colette
Multimodal RAG to search and interact locally with technical documents of any kind
nannib/nbmultirag
A framework in Italian and English that lets you chat with your own documents via RAG, ...
OpenBMB/VisRAG
Parsing-free RAG supported by VLMs