Maverick0351a/neuralcache
NeuralCache is a drop-in reranker for Retrieval-Augmented Generation (RAG) that learns which context the model actually uses.
This project helps anyone building an AI assistant or chatbot improve the quality and relevance of its responses. You provide the user's question and a list of candidate passages (such as articles or documents), and it reorders them to put the most helpful information first. This ensures your AI uses the best available context, leading to more accurate and useful replies for your customers, employees, or users.
Use this if your AI applications, like customer support copilots or internal knowledge bases, struggle to find the most relevant information from a large set of documents, leading to unhelpful or generic responses.
Not ideal if you are looking for a standalone large language model or a complete AI application, as this tool focuses specifically on improving how context is selected.
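To illustrate what a reranker does in a RAG pipeline, here is a minimal sketch that orders candidate documents by naive term overlap with the query. This is a deliberately simple stand-in for NeuralCache's learned signal; the function name and scoring are illustrative, not the library's actual API.

```python
def rerank(query: str, docs: list[str]) -> list[str]:
    """Order docs by term overlap with the query.

    Illustrative stand-in for a learned reranker such as NeuralCache,
    which would instead score documents by how much the model uses them.
    """
    q_terms = set(query.lower().split())

    def score(doc: str) -> int:
        # Count how many query terms appear in the document.
        return len(q_terms & set(doc.lower().split()))

    # Python's sort is stable, so ties keep their original order.
    return sorted(docs, key=score, reverse=True)


docs = [
    "How to reset your password",
    "Company holiday schedule",
    "Password reset steps for locked accounts",
]
ranked = rerank("reset my password", docs)
```

A real reranker would replace the `score` function with a learned model, but the interface, a query plus candidate documents in, a reordered list out, is the same.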
Stars: 12
Forks: 1
Language: Python
License: Apache-2.0
Category:
Last pushed: Feb 02, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/vector-db/Maverick0351a/neuralcache"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
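The same endpoint can be queried from Python with only the standard library. The response schema is not documented on this page, so the JSON is returned as-is; the helper function names below are this example's own, not part of the API.

```python
import json
import urllib.request

# Base endpoint as given on this page.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/vector-db"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-data endpoint URL for a given repository."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch quality data; the free tier allows 100 requests/day with no key."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(fetch_quality("Maverick0351a", "neuralcache"))
```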
Higher-rated alternatives
yichuan-w/LEANN
[MLsys2026]: RAG on Everything with LEANN. Enjoy 97% storage savings while running a fast,...
byerlikaya/SmartRAG
Multi-Modal RAG for .NET — query databases, documents, images and audio in natural language....
aws-samples/layout-aware-document-processing-and-retrieval-augmented-generation
Advanced document extraction and chunking techniques for retrieval augmented generation that is...
sourangshupal/simple-rag-langchain
Exploring the Basics of Langchain
sion42x/llama-index-milvus-example
Open AI APIs with Llama Index and Milvus Vector DB for Retrieval Augmented Generation (RAG) testing