mcp-rag-server and supernova-mcp-rag

The `shabib87/supernova-mcp-rag` project appears to be a practical proof of concept demonstrating how to build and run a local Model Context Protocol (MCP) server for Retrieval-Augmented Generation (RAG). That makes it an ecosystem sibling of `kwanLeeFrmVi/mcp-rag-server`: a specific example of, and possibly inspired by, the broader capability the latter implements.

| | mcp-rag-server | supernova-mcp-rag |
|---|---|---|
| Score | 38 (Emerging) | 32 (Emerging) |
| Maintenance | 2/25 | 2/25 |
| Adoption | 7/25 | 1/25 |
| Maturity | 16/25 | 15/25 |
| Community | 13/25 | 14/25 |
| Stars | 25 | 1 |
| Forks | 4 | 3 |
| Downloads | — | — |
| Commits (30d) | 0 | 0 |
| Language | TypeScript | TypeScript |
| License | MIT | MIT |
| Flags | Stale 6m · No Package · No Dependents | Stale 6m · No Package · No Dependents |

About mcp-rag-server

kwanLeeFrmVi/mcp-rag-server

mcp-rag-server is a Model Context Protocol (MCP) server that enables Retrieval Augmented Generation (RAG) capabilities. It empowers Large Language Models (LLMs) to answer questions based on your document content by indexing and retrieving relevant information efficiently.

This is a tool for developers who integrate large language models (LLMs) into applications. It takes your collection of documents, like text files or markdown, and turns them into a searchable index. This index then helps your LLM provide more accurate and context-aware answers based on your specific content, rather than just its general training data.
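The index-then-retrieve flow described above can be sketched in a few lines. This is an illustrative example only, not the project's actual API: the names (`chunkDocument`, `search`) are hypothetical, and real MCP RAG servers score chunks with embedding similarity rather than this toy term-matching.

```typescript
interface Chunk {
  doc: string;   // source document name
  text: string;  // chunk contents
}

// Split a document into fixed-size chunks. Real implementations often
// overlap chunks and split on sentence or heading boundaries.
function chunkDocument(doc: string, text: string, size = 200): Chunk[] {
  const chunks: Chunk[] = [];
  for (let i = 0; i < text.length; i += size) {
    chunks.push({ doc, text: text.slice(i, i + size) });
  }
  return chunks;
}

// Score a chunk by how many query terms it contains (toy stand-in for
// embedding-based similarity).
function score(chunk: Chunk, terms: string[]): number {
  const lower = chunk.text.toLowerCase();
  return terms.filter((t) => lower.includes(t)).length;
}

// Return the top-k chunks for a query; a RAG server would inject these
// into the LLM prompt as retrieved context.
function search(index: Chunk[], query: string, k = 3): Chunk[] {
  const terms = query.toLowerCase().split(/\s+/);
  return index
    .map((c) => ({ c, s: score(c, terms) }))
    .filter((x) => x.s > 0)
    .sort((a, b) => b.s - a.s)
    .slice(0, k)
    .map((x) => x.c);
}
```

The key design point is that retrieval happens outside the model: the LLM never sees the whole corpus, only the handful of chunks the index ranks as relevant to the question.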

Tags: LLM-integration · developer-tool · information-retrieval · contextual-AI · AI-application-development

About supernova-mcp-rag

shabib87/supernova-mcp-rag

A practical POC demonstrating how to build and run a local MCP server with Retrieval-Augmented Generation (RAG) for semantic search over internal documentation. Leverages Node.js, TypeScript, Hugging Face embeddings, and an in-memory vector store to enable fast, context-aware answers in tools like Cursor.
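An in-memory vector store of the kind described here reduces to cosine similarity over embedding vectors. The sketch below is a minimal assumption-laden illustration: the project uses Hugging Face embedding models, whereas `embed` here is a toy letter-frequency stand-in so the example stays self-contained, and `InMemoryVectorStore` is a hypothetical name, not the project's class.

```typescript
type Vector = number[];

// Toy embedding: a 26-dimension letter-frequency vector. A real server
// would call a Hugging Face embedding model here instead.
function embed(text: string): Vector {
  const v: Vector = new Array(26).fill(0);
  for (const ch of text.toLowerCase()) {
    const i = ch.charCodeAt(0) - 97; // 'a' = 97
    if (i >= 0 && i < 26) v[i] += 1;
  }
  return v;
}

// Cosine similarity between two vectors; 0 when either is all zeros.
function cosine(a: Vector, b: Vector): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return na && nb ? dot / (Math.sqrt(na) * Math.sqrt(nb)) : 0;
}

class InMemoryVectorStore {
  private entries: { id: string; vec: Vector }[] = [];

  add(id: string, text: string): void {
    this.entries.push({ id, vec: embed(text) });
  }

  // Return the ids of the k entries nearest the query by cosine similarity.
  query(text: string, k = 3): string[] {
    const q = embed(text);
    return [...this.entries]
      .sort((a, b) => cosine(b.vec, q) - cosine(a.vec, q))
      .slice(0, k)
      .map((e) => e.id);
  }
}
```

Keeping the vectors in memory is what makes this setup fast and easy to run locally; the trade-off is that the index is rebuilt on restart and is bounded by available RAM, which is fine for a POC over internal docs.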

Scores updated daily from GitHub, PyPI, and npm data.