nkapila6/mcp-local-rag

"primitive" RAG-like web search model context protocol (MCP) server that runs locally. ✨ no APIs ✨

Quality score: 53 / 100 (Established)

This tool helps large language models (LLMs) like Claude perform deep, up-to-date web research and answer questions using current information. It takes your natural language questions, searches across many web sources, extracts the most relevant details, and feeds them back to the LLM. Researchers, analysts, or anyone needing accurate, real-time information from an AI assistant would find this useful.


Use this if you want your AI assistant to conduct comprehensive, multi-source web searches and provide answers based on the very latest information, rather than just its training data.

Not ideal for simple, quick fact-checks that don't need deep, multi-engine research, or if you prefer searching the web yourself.

Tags: AI-assistant-research, information-retrieval, deep-research, web-search, knowledge-discovery
No package · No dependents

Maintenance: 10 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 17 / 25

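The overall 53 / 100 appears to be a straight sum of the four category scores listed above. This is an observation from the numbers on this page, not a documented formula:

```python
# Category scores as shown on this page; summing them reproduces the
# overall 53/100 (an inference from these numbers, not a stated formula).
subscores = {"Maintenance": 10, "Adoption": 10, "Maturity": 16, "Community": 17}
overall = sum(subscores.values())
print(overall)  # → 53
```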

Stars: 118
Forks: 19
Language: Python
License: MIT
Last pushed: Mar 09, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/mcp/nkapila6/mcp-local-rag"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
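The same data can be fetched from Python instead of curl. The URL pattern `/api/v1/quality/{registry}/{owner}/{repo}` is inferred from the single example above, and the shape of the returned JSON is not documented on this page, so treat this as a sketch:

```python
# Sketch of calling the quality API from Python. Only the example URL comes
# from this page; the {registry}/{owner}/{repo} pattern is an assumption.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(registry: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a given package."""
    return f"{BASE}/{registry}/{owner}/{repo}"

def fetch_quality(registry: str, owner: str, repo: str) -> dict:
    """Fetch and decode the quality JSON (keyless tier: 100 requests/day)."""
    with urllib.request.urlopen(quality_url(registry, owner, repo)) as resp:
        return json.load(resp)

print(quality_url("mcp", "nkapila6", "mcp-local-rag"))
```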