ZhishanQ/QuCo-RAG
Official code implementation of the paper: QuCo-RAG: Quantifying Uncertainty from the Pre-training Corpus for Dynamic Retrieval-Augmented Generation
This project helps developers and researchers working with large language models (LLMs) make their models more reliable and less prone to generating incorrect information ("hallucinations"). Given a query or statement, it uses pre-training corpus statistics to decide when the LLM should retrieve external information versus answering from its own parametric knowledge, producing more accurate and contextually relevant output. It is designed for AI researchers, ML engineers, and data scientists who build or deploy LLM-powered applications.
Use this if you are building retrieval-augmented generation (RAG) systems and want a robust, data-driven method to decide when your LLM should fetch external knowledge to reduce factual errors.
Not ideal if you are looking for a plug-and-play solution for end-users, or if your primary concern is not improving factual accuracy and reducing hallucinations in LLM outputs.
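The idea described above can be sketched in a few lines. This is an illustrative proxy only, not the paper's actual method: it assumes that entities rare (or unseen) in the pre-training corpus signal high model uncertainty and should trigger retrieval. The frequency table, threshold, and function names are all hypothetical.

```python
# Hypothetical pre-training corpus occurrence counts for entities in a query.
CORPUS_FREQUENCY = {
    "Paris": 1_200_000,
    "Eiffel Tower": 450_000,
    "some-obscure-entity": 12,
}

def should_retrieve(entities, threshold=1_000):
    """Trigger external retrieval when any entity in the query is rare
    (or absent) in the pre-training corpus, i.e. the LLM is likely
    uncertain about it; otherwise answer from parametric knowledge."""
    return any(CORPUS_FREQUENCY.get(e, 0) < threshold for e in entities)

# Well-attested entities: no retrieval needed.
print(should_retrieve(["Paris", "Eiffel Tower"]))  # False
# Rare entity: fall back to retrieval-augmented generation.
print(should_retrieve(["some-obscure-entity"]))    # True
```

A real system would extract entities from the query and look up counts in an index over the actual pre-training corpus; see the paper for how QuCo-RAG quantifies uncertainty.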
Stars: 38
Forks: 4
Language: Python
License: MIT
Category:
Last pushed: Jan 10, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/ZhishanQ/QuCo-RAG"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
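The same endpoint can be called from Python with the standard library. The response schema is not documented here, so this sketch only builds the URL and shows how a fetch would look; the `quality_url` helper is an assumption for illustration.

```python
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a given GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"

url = quality_url("ZhishanQ", "QuCo-RAG")
print(url)  # https://pt-edge.onrender.com/api/v1/quality/rag/ZhishanQ/QuCo-RAG

# Uncomment to fetch (rate-limited to 100 requests/day without a key):
# with urlopen(url) as resp:
#     print(json.loads(resp.read()))
```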
Higher-rated alternatives
NirDiamant/RAG_Techniques
This repository showcases various advanced techniques for Retrieval-Augmented Generation (RAG)...
VectorInstitute/fed-rag
A framework for fine-tuning retrieval-augmented generation (RAG) systems.
RUC-NLPIR/FlashRAG
⚡FlashRAG: A Python Toolkit for Efficient RAG Research (WWW2025 Resource)
ictnlp/FlexRAG
FlexRAG: A RAG Framework for Information Retrieval and Generation.
Andrew-Jang/RAGHub
A community-driven collection of RAG (Retrieval-Augmented Generation) frameworks, projects, and...