PolarisLiu1/LAT
Look As You Think: Unifying Reasoning and Visual Evidence Attribution for Verifiable Document RAG via Reinforcement Learning (Poster of AAAI'26)
This project makes information retrieved from documents more trustworthy and easier to verify. Given documents (such as PDFs or web pages) and a user question, it produces an answer together with step-by-step reasoning that points directly to the exact visual evidence in the original documents. It is aimed at professionals who need highly reliable, explainable answers from large document sets, such as researchers or legal analysts.
Use this if you need to understand not just the answer to a question from a document, but also the detailed reasoning and specific visual evidence that supports each part of that answer.
Not ideal if you only need quick answers without detailed evidence or if your documents are purely text-based without any visual elements that need to be referenced.
Stars: 8
Forks: 1
Language: Python
License: —
Category:
Last pushed: Dec 01, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/PolarisLiu1/LAT"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
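The same endpoint can be called from Python instead of curl. A minimal sketch: the function and variable names below are illustrative, and since the response schema is not documented here, the JSON is treated as an opaque dict.

```python
import json
import urllib.request

# Base path of the quality API shown in the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"


def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record for a repository.

    The response schema is not documented on this page,
    so the parsed JSON is returned as-is.
    """
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)
```

For example, `fetch_quality("PolarisLiu1", "LAT")` retrieves the same data as the curl command above; an API key (for the higher rate limit) would presumably be attached per the service's own docs, which are not shown here.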
Higher-rated alternatives
onestardao/WFGY
WFGY: open-source reasoning and debugging infrastructure for RAG and AI agents. Includes the...
KRLabsOrg/verbatim-rag
Hallucination-prevention RAG system with verbatim span extraction. Ensures all generated content...
iMoonLab/Hyper-RAG
"Hyper-RAG: Combating LLM Hallucinations using Hypergraph-Driven Retrieval-Augmented Generation"...
frmoretto/clarity-gate
Stop LLMs from hallucinating your guesses as facts. Clarity Gate is a verification protocol for...
project-miracl/nomiracl
NoMIRACL: A multilingual hallucination evaluation dataset to evaluate LLM robustness in RAG...