mcp_server_filesystem and mcp-file-context-server

These two Model Context Protocol (MCP) servers compete in the same space: both aim to provide file system operations and context to AI assistants and LLMs. Their emphases differ, however: mcp_server_filesystem focuses on secure file operations for AI assistants, while mcp-file-context-server focuses on advanced caching for code analysis.

Comparison at a glance

                     mcp_server_filesystem    mcp-file-context-server
Status               Established              Stale (6 months)
Overall score        52
Maintenance          10/25                    2/25
Adoption             8/25                     7/25
Maturity             16/25                    16/25
Community            18/25                    17/25
Stars                45                       36
Forks                15                       8
Downloads            n/a                      n/a
Commits (30d)        0                        0
Language             Python                   JavaScript
License              MIT                      MIT
Package              none published           none published
Dependents           none                     none

About mcp_server_filesystem

MarcusJellinghaus/mcp_server_filesystem

MCP File System Server: A secure Model Context Protocol server that provides file operations for AI assistants. Enables Claude and other assistants to safely read, write, and list files in a designated project directory with robust path validation and security controls.

This tool helps software developers and engineers connect their AI assistants, such as Claude, directly to their local code projects. It allows the AI to read existing project files, write new code, modify specific sections, and manage files within a designated project directory. The result is an AI-assisted workflow in which developers use natural-language prompts to have the AI generate, modify, and organize code directly in their project files.
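As a sketch of how an MCP server like this is typically registered with a client, the following is a hypothetical Claude Desktop configuration entry. The server name, launch command, and the --project-dir argument are illustrative assumptions, not taken from the project's documentation:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "python",
      "args": ["-m", "mcp_server_filesystem", "--project-dir", "/path/to/your/project"]
    }
  }
}
```

MCP clients launch the configured command as a subprocess and exchange JSON-RPC messages with it over stdio, so pointing the server at a single directory when it starts is what enforces the "designated project directory" boundary described above.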

Tags: software-development, AI-assisted-coding, code-generation, developer-tools, devops

About mcp-file-context-server

bsmi021/mcp-file-context-server

A Model Context Protocol (MCP) server that provides file system context to Large Language Models (LLMs). This server enables LLMs to read, search, and analyze code files with advanced caching and real-time file watching capabilities.

This tool helps developers working with Large Language Models (LLMs) efficiently access and understand their codebase. It takes a project's files and directories as input and lets the LLM read, search, and analyze code, with advanced features such as real-time file watching and smart caching. The primary users are software developers and prompt engineers looking to improve LLM interactions with code.
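For comparison, a hypothetical client configuration for this server might look like the following. Since the stats above show no published package, this sketch assumes the repository has been cloned and built locally; the server name, file path, and entry point are illustrative assumptions:

```json
{
  "mcpServers": {
    "file-context": {
      "command": "node",
      "args": ["/path/to/mcp-file-context-server/build/index.js"]
    }
  }
}
```

Once registered, the client exposes the server's read, search, and analysis capabilities as tools the LLM can invoke during a conversation.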

Tags: software-development, code-analysis, large-language-models, developer-tools, prompt-engineering

Scores updated daily from GitHub, PyPI, and npm data.