Shaivpidadi/refrag
REFRAG: LLM-powered representations for better RAG retrieval. Improve precision, reduce context size, same speed.
This project makes retrieval-augmented AI answers more accurate and cheaper. It splits a large document collection into small, precise chunks, then compresses the less relevant pieces at the moment you ask a question. The result is a more focused context for the model, which cuts token costs and improves answer quality, especially for anyone managing large document archives or building AI-powered customer support, research, or knowledge management systems.
Use this if you manage large document collections for AI applications, need to control token costs, and require precise information retrieval for better AI responses.
Not ideal if you have only a small number of documents (e.g., fewer than 100) or if your model's context window is not a bottleneck.
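The chunk-then-compress idea above can be sketched in a few lines. This is a minimal, hypothetical illustration of the approach, not the project's actual API: the function names (`chunk`, `relevance`, `build_context`) and the word-overlap scoring are assumptions chosen for clarity; REFRAG's real implementation uses LLM-powered representations.

```python
def chunk(text, size=50):
    """Split text into chunks of roughly `size` words each."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def relevance(chunk_text, query):
    """Crude stand-in score: fraction of query terms present in the chunk."""
    terms = set(query.lower().split())
    return len(terms & set(chunk_text.lower().split())) / max(len(terms), 1)

def build_context(chunks, query, keep=2, summary_words=8):
    """Keep the top-`keep` chunks verbatim; truncate the rest to short stubs."""
    ranked = sorted(chunks, key=lambda c: relevance(c, query), reverse=True)
    kept = ranked[:keep]
    compressed = [" ".join(c.split()[:summary_words]) + " ..." for c in ranked[keep:]]
    return kept + compressed
```

The point of the sketch is the shape of the pipeline: relevant chunks survive intact, while the rest still contribute a cheap, lossy summary instead of being dropped or sent in full.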
Stars: 26
Forks: 8
Language: Python
License: MIT
Category:
Last pushed: Dec 29, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/Shaivpidadi/refrag"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
Marker-Inc-Korea/AutoRAG
AutoRAG: An Open-Source Framework for Retrieval-Augmented Generation (RAG) Evaluation &...
jxzhangjhu/Awesome-LLM-RAG
Awesome-LLM-RAG: a curated list of advanced retrieval augmented generation (RAG) in Large Language Models
IntelLabs/RAG-FiT
Framework for enhancing LLMs for RAG tasks using fine-tuning.
coree/awesome-rag
A curated list of retrieval-augmented generation (RAG) in large language models
IntelLabs/fastRAG
Efficient Retrieval Augmentation and Generation Framework