Betswish/MIRAGE
Easy-to-use MIRAGE code for faithful answer attribution in RAG applications. Paper: https://aclanthology.org/2024.emnlp-main.347/
This project helps verify that answers generated by AI models are accurate and directly supported by the provided source documents. You supply a question, a set of relevant documents, and, optionally, an AI-generated answer; it returns the answer annotated with 'attributions' that link each part of the answer to the source documents supporting it. It is aimed at AI developers, researchers, and anyone building or evaluating AI-powered question-answering systems.
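As a rough illustration of the input/output shape described above, the sketch below renders an answer with per-sentence citation markers. All names here are hypothetical and for illustration only; the actual MIRAGE interface may represent attributions differently.

```python
# Hypothetical sketch of attributed-answer rendering; not the MIRAGE API.

def render_attributed_answer(sentences, attributions):
    """Attach source-document citation markers to each answer sentence.

    sentences:    list of answer sentences (strings)
    attributions: dict mapping sentence index -> list of 1-based doc ids
    """
    parts = []
    for i, sent in enumerate(sentences):
        cites = "".join(f"[{d}]" for d in attributions.get(i, []))
        parts.append(sent + cites)
    return " ".join(parts)

documents = [
    "The Eiffel Tower was completed in 1889.",          # doc 1
    "It stands on the Champ de Mars in Paris.",         # doc 2
]
answer_sentences = ["The Eiffel Tower was completed in 1889."]

# Sentence 0 is supported by document 1:
print(render_attributed_answer(answer_sentences, {0: [1]}))
# -> The Eiffel Tower was completed in 1889.[1]
```

The key idea is that each span of the generated answer carries an explicit pointer back to the retrieved document that supports it, so unsupported claims become visible.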
No commits in the last 6 months.
Use this if you need to verify the factual accuracy and source-groundedness of AI-generated answers in your retrieval-augmented generation (RAG) applications.
Not ideal if you are looking for a general-purpose AI model for generating answers without needing detailed source attribution.
Stars: 26
Forks: 3
Language: Python
License: Apache-2.0
Category:
Last pushed: Mar 10, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/Betswish/MIRAGE"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
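The same endpoint can be queried from Python without extra dependencies. The sketch below builds the URL shown in the curl example and fetches it with the standard library; the response schema is not documented here, so the JSON is returned as-is, and the `Authorization: Bearer` header used for keyed access is an assumption.

```python
# Minimal sketch of calling the quality API from Python (stdlib only).
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"

def quality_url(owner, repo):
    """Build the per-repository endpoint URL."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner, repo, api_key=None):
    """GET the quality record for owner/repo as parsed JSON.

    The header name for keyed access is assumed, not documented here.
    """
    req = urllib.request.Request(quality_url(owner, repo))
    if api_key:
        req.add_header("Authorization", f"Bearer {api_key}")
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

# Example (requires network):
# data = fetch_quality("Betswish", "MIRAGE")
# print(json.dumps(data, indent=2))
```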
Higher-rated alternatives
onestardao/WFGY
WFGY: open-source reasoning and debugging infrastructure for RAG and AI agents. Includes the...
KRLabsOrg/verbatim-rag
Hallucination-prevention RAG system with verbatim span extraction. Ensures all generated content...
iMoonLab/Hyper-RAG
"Hyper-RAG: Combating LLM Hallucinations using Hypergraph-Driven Retrieval-Augmented Generation"...
frmoretto/clarity-gate
Stop LLMs from hallucinating your guesses as facts. Clarity Gate is a verification protocol for...
project-miracl/nomiracl
NoMIRACL: A multilingual hallucination evaluation dataset to evaluate LLM robustness in RAG...