Shaivpidadi/refrag

REFRAG: LLM-powered representations for better RAG retrieval. Improves precision and reduces context size at the same speed.

Score: 43 / 100 (Emerging)

This project helps AI chatbots give more accurate answers by making their knowledge base more efficient. It splits a large document collection into small, precise chunks, then compresses the less important information at the moment you ask a question. The result is a more focused, relevant context for the model, which lowers token costs and improves answer quality, particularly for anyone managing large document archives or building AI-powered customer support, research, or knowledge-management systems.
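The chunk-then-compress idea above can be illustrated with a minimal sketch. This is a toy illustration, not the project's actual code: the term-overlap relevance score and word-truncation "compression" here are stand-in assumptions for whatever the real implementation uses.

```python
def build_context(query_terms, chunks, keep_full=2, snippet_words=8):
    # Toy relevance score: how many query terms appear in the chunk.
    scored = sorted(
        chunks,
        key=lambda c: sum(t in c.lower() for t in query_terms),
        reverse=True,
    )
    parts = list(scored[:keep_full])  # most relevant chunks pass through verbatim
    for c in scored[keep_full:]:      # less relevant chunks are compressed to snippets
        parts.append(" ".join(c.split()[:snippet_words]) + " ...")
    return "\n".join(parts)

chunks = [
    "Retrieval augmented generation uses a vector store.",
    "Bananas are yellow fruit grown in the tropics.",
    "Retrieval quality depends on chunking strategy and embeddings.",
]
ctx = build_context(["retrieval"], chunks)
print(ctx)
```

The on-topic chunks survive intact while the off-topic one is squeezed into a short snippet, shrinking the prompt that reaches the model.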

Use this if you manage large document collections for AI applications, need to control token costs, and require precise information retrieval for better AI responses.

Not ideal if you have only a small number of documents (e.g., fewer than 100) or if the current token context window of your AI models is not a bottleneck.

Tags: knowledge-management · AI-applications · document-retrieval · AI-optimization · information-extraction
No package · No dependents
Maintenance: 6 / 25
Adoption: 7 / 25
Maturity: 13 / 25
Community: 17 / 25
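The four component scores appear to sum to the overall score of 43, which suggests (an assumption, since the scoring method is not documented here) that the total is a simple sum:

```python
# Component scores as listed above; the sum-to-total relationship is an assumption.
components = {"Maintenance": 6, "Adoption": 7, "Maturity": 13, "Community": 17}
total = sum(components.values())
print(total)  # matches the overall 43 / 100 shown above
```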


Stars: 26
Forks: 8
Language: Python
License: MIT
Last pushed: Dec 29, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/Shaivpidadi/refrag"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
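A minimal Python equivalent of the curl call above. The `rag` path segment is copied verbatim from the example, and the response's JSON schema is undocumented here, so the actual fetch is left as a commented-out sketch:

```python
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    # Mirror the endpoint shape from the curl example above.
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("rag", "Shaivpidadi", "refrag")
print(url)

# To actually fetch (network required; response schema not documented here):
# import json, urllib.request
# data = json.load(urllib.request.urlopen(url))
```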