mcp-local-rag and Real-time-web-search-RAG-with-MCP

The conversational agent in Real-time-web-search-RAG-with-MCP acts as a client of the local, RAG-like web search Model Context Protocol (MCP) server provided by mcp-local-rag, using it for real-time web search.

Metric           mcp-local-rag                Real-time-web-search-RAG-with-MCP
Maintenance      10/25                        2/25
Adoption         10/25                        4/25
Maturity         16/25                        15/25
Community        17/25                        13/25
Stars            118                          5
Forks            19                           2
Downloads
Commits (30d)    0                            0
Language         Python                       Python
License          MIT                          MIT
Package status   No Package, No Dependents    Stale 6m, No Package, No Dependents

About mcp-local-rag

nkapila6/mcp-local-rag

"primitive" RAG-like web search model context protocol (MCP) server that runs locally. ✨ no APIs ✨

This tool helps large language models (LLMs) like Claude perform deep, up-to-date web research and answer questions using current information. It takes your natural language questions, searches across many web sources, extracts the most relevant details, and feeds them back to the LLM. Researchers, analysts, or anyone needing accurate, real-time information from an AI assistant would find this useful.

AI-assistant-research information-retrieval deep-research web-search knowledge-discovery
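The description above outlines a pipeline: take a natural language question, search several web sources, rank and extract the most relevant snippets, and hand them to the LLM as context. The sketch below is purely illustrative of that flow; the function names, the keyword-overlap scoring, and the in-memory "pages" are all assumptions and do not reflect mcp-local-rag's actual implementation.

```python
# Illustrative sketch of a RAG-like search loop: score sources against the
# question, keep the best matches, and assemble them into LLM context.
# All names here are hypothetical, not mcp-local-rag's real API.
from dataclasses import dataclass


@dataclass
class Snippet:
    source: str
    text: str
    score: float  # relevance score used for ranking


def search_sources(question: str, sources: dict[str, str]) -> list[Snippet]:
    """Rank sources by naive keyword overlap with the question."""
    terms = set(question.lower().split())
    results = []
    for name, text in sources.items():
        overlap = len(terms & set(text.lower().split()))
        if overlap:
            results.append(Snippet(name, text, overlap / len(terms)))
    return sorted(results, key=lambda s: s.score, reverse=True)


def build_llm_context(question: str, sources: dict[str, str], top_k: int = 2) -> str:
    """Assemble the top-ranked snippets into a context block for the LLM."""
    top = search_sources(question, sources)[:top_k]
    body = "\n".join(f"[{s.source}] {s.text}" for s in top)
    return f"Question: {question}\nContext:\n{body}"


# Toy "web" of two pages; a real server would fetch and parse live results.
pages = {
    "site-a": "MCP servers expose tools that LLM clients can call",
    "site-b": "Cooking recipes for pasta",
}
print(build_llm_context("what tools do MCP servers expose", pages))
```

A real implementation would replace the keyword overlap with an embedding or reranking model and fetch pages live, but the shape of the loop is the same.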

About Real-time-web-search-RAG-with-MCP

PranithChowdary/Real-time-web-search-RAG-with-MCP

A conversational agent that combines static knowledge (RAG over your docs) with real-time web search using an MCP server.
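The client-side pattern named above, static RAG over local documents plus real-time web search through an MCP server, can be sketched as follows. This is a hypothetical outline, not the project's code: the MCP call is stubbed, and in the real agent it would be a `tools/call` request to the server over stdio or HTTP.

```python
# Hypothetical sketch: retrieve from a static document index, then merge in
# fresh web results obtained via an MCP tool call (stubbed out here).
def retrieve_static(question: str, doc_index: list[str]) -> list[str]:
    """Naive keyword-overlap retrieval over a local document list."""
    terms = set(question.lower().split())
    return [d for d in doc_index if terms & set(d.lower().split())]


def search_web_via_mcp(question: str) -> list[str]:
    # Stub for an MCP tool invocation; a real client would send a
    # tools/call request to the MCP server and parse its response.
    return [f"[web] live result for: {question}"]


def answer_context(question: str, doc_index: list[str]) -> list[str]:
    """Combine static RAG hits with real-time web results."""
    hits = retrieve_static(question, doc_index)
    hits += search_web_via_mcp(question)  # always append fresh web context
    return hits


docs = ["MCP defines a protocol between LLM clients and tool servers"]
print(answer_context("what is the MCP protocol", docs))
```

The design choice to query both sources lets the agent answer from its curated documents while still covering questions that need current information.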

Scores updated daily from GitHub, PyPI, and npm data.