ZhishanQ/QuCo-RAG

Official code implementation of the paper: QuCo-RAG: Quantifying Uncertainty from the Pre-training Corpus for Dynamic Retrieval-Augmented Generation

Quality score: 36 / 100 (Emerging)

This project helps developers and researchers make large language models (LLMs) more reliable and less prone to generating incorrect information, known as "hallucinations." Given a query or statement, it uses pre-training corpus statistics to decide when the LLM should retrieve external information and when it can answer from its own parametric knowledge, producing more accurate and contextually grounded answers. It is aimed at AI researchers, ML engineers, and data scientists who build or deploy LLM-powered applications.
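
To make the decision rule concrete, here is a minimal, hypothetical sketch of the idea: estimate how well the pre-training corpus covers the entities in a query, and trigger retrieval only when coverage is low and the model's parametric knowledge is therefore likely unreliable. The function names, the frequency-table source, and the threshold are all illustrative assumptions, not the repository's actual API.

# Hypothetical sketch of corpus-statistics-gated retrieval.
# `corpus_frequency`, `freq_table`, and `min_count` are illustrative
# assumptions, not part of the QuCo-RAG codebase.

def corpus_frequency(entity: str, freq_table: dict[str, int]) -> int:
    """Look up how often an entity string appears in the pre-training data."""
    return freq_table.get(entity, 0)

def should_retrieve(query_entities: list[str],
                    freq_table: dict[str, int],
                    min_count: int = 1000) -> bool:
    """Retrieve when any entity in the query is rare in the corpus,
    i.e. when the model's parametric knowledge is likely unreliable."""
    return any(corpus_frequency(e, freq_table) < min_count
               for e in query_entities)

# Usage: a rare entity triggers retrieval; a common one does not.
freqs = {"Paris": 2_500_000, "QuCo-RAG": 12}
print(should_retrieve(["QuCo-RAG"], freqs))  # True: fetch external knowledge
print(should_retrieve(["Paris"], freqs))     # False: generate directly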

Use this if you are building retrieval-augmented generation (RAG) systems and want a robust, data-driven method to decide when your LLM should fetch external knowledge to reduce factual errors.

Not ideal if you need a plug-and-play solution for end users, or if improving factual accuracy and reducing hallucinations in LLM outputs is not a primary concern.

large-language-models retrieval-augmented-generation natural-language-processing hallucination-prevention AI-reliability
No package published · No dependents
Maintenance: 6 / 25
Adoption: 7 / 25
Maturity: 13 / 25
Community: 10 / 25


Stars: 38
Forks: 4
Language: Python
License: MIT
Last pushed: Jan 10, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/ZhishanQ/QuCo-RAG"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
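
For programmatic access, here is a minimal Python sketch using the requests library. The URL is the documented endpoint above; the JSON response schema is not shown on this page, so the example simply prints the payload rather than assuming specific fields, and it does not guess at how an API key would be passed.

# Minimal sketch of fetching the quality data via the documented endpoint.
# The response schema is an assumption; inspect the printed payload before
# relying on specific keys.
import requests

API_URL = "https://pt-edge.onrender.com/api/v1/quality/rag/ZhishanQ/QuCo-RAG"

resp = requests.get(API_URL, timeout=10)
resp.raise_for_status()  # surface HTTP errors (e.g. rate limiting) early
print(resp.json())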