lotus-wisdom-mcp and mcp-structured-thinking

Both tools are Model Context Protocol (MCP) servers. The first, linxule/lotus-wisdom-mcp, implements structured problem-solving based on the Lotus Sutra's wisdom framework and works with a range of MCP clients. The second, Promptly-Technologies-LLC/mcp-structured-thinking, lets LLMs programmatically construct mind maps with enforced metacognitive self-reflection. They are complementary in that both use the MCP protocol for structured reasoning, but they target different conceptual frameworks and client types.

lotus-wisdom-mcp
Score: 57 (Established)
Maintenance 10/25 | Adoption 6/25 | Maturity 25/25 | Community 16/25
Stars: 21 | Forks: 6 | Commits (30d): 0
Downloads:
Language: JavaScript | License: MIT
Risk flags: none

mcp-structured-thinking
Maintenance 0/25 | Adoption 7/25 | Maturity 16/25 | Community 16/25
Stars: 26 | Forks: 7 | Commits (30d): 0
Downloads:
Language: TypeScript | License: MIT
Risk flags: Stale 6m, No Package, No Dependents

About lotus-wisdom-mcp

linxule/lotus-wisdom-mcp

MCP server for structured problem-solving using the Lotus Sutra's wisdom framework. Beautiful visualizations, multiple thinking approaches, compatible with various MCP clients (e.g., Claude Desktop, Cursor, Cherry Studio).

This tool helps you tackle complex problems by guiding you through a structured, contemplative thought process inspired by the Lotus Sutra. You input your problem and follow a series of thinking steps, receiving back a multi-faceted understanding and integrated insights. It's designed for anyone seeking a deeper, more intuitive approach to problem-solving, particularly those working with AI assistants like Claude or ChatGPT.
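The stepwise process described above can be sketched in TypeScript. This is an illustrative model only, not the server's actual API: the class, method, and step names below are assumptions chosen to show how a guided sequence of thinking steps can accumulate into an integrated result.

```typescript
// Hypothetical sketch of a guided, stepwise thinking session
// (names are illustrative; they do not mirror lotus-wisdom-mcp's real tools).

type ThinkingStep = { tag: string; thought: string };

class WisdomSession {
  private steps: ThinkingStep[] = [];

  // Record one thinking step under a descriptive tag.
  record(tag: string, thought: string): void {
    this.steps.push({ tag, thought });
  }

  // Integrate all recorded perspectives into a single summary string.
  integrate(): string {
    return this.steps.map((s) => `[${s.tag}] ${s.thought}`).join(" -> ");
  }
}

const session = new WisdomSession();
session.record("open", "State the problem without judgment");
session.record("examine", "Consider it from several perspectives");
session.record("integrate", "Synthesize the perspectives into one insight");
console.log(session.integrate());
```

In the real server, the MCP client (e.g. Claude Desktop) drives steps like these through tool calls, and the server returns the multi-faceted understanding described above.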

Tags: problem-solving, contemplation, critical-thinking, personal-development, decision-making

About mcp-structured-thinking

Promptly-Technologies-LLC/mcp-structured-thinking

A TypeScript Model Context Protocol (MCP) server that allows LLMs to programmatically construct mind maps to explore an idea space, with enforced "metacognitive" self-reflection.

This tool helps Large Language Models (LLMs) systematically explore complex ideas by constructing and managing an evolving mind map. It takes an LLM's raw thoughts and organizes them into stages, assigns quality scores, and enables parallel exploration of different lines of reasoning. The primary users are developers or researchers working with LLMs who need to improve the LLM's ability to structure its thinking process.
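The mechanism described above (thoughts organized into stages, given quality scores, and branched into parallel lines of reasoning) can be sketched as a small TypeScript data structure. This is an assumed illustration of the concept, not the package's actual implementation; the stage names, score scale, and pruning policy are all hypothetical.

```typescript
// Illustrative mind-map structure: staged, scored, branching thoughts.
// All names and the 0-1 score scale are assumptions for this sketch.

type Stage = "problem-definition" | "ideation" | "evaluation";

interface Thought {
  id: number;
  stage: Stage;
  content: string;
  score: number;     // quality score assigned to the thought (0-1 here)
  parentId?: number; // multiple children of one parent = parallel branches
}

class MindMap {
  private thoughts = new Map<number, Thought>();
  private nextId = 1;

  add(stage: Stage, content: string, score: number, parentId?: number): number {
    const id = this.nextId++;
    this.thoughts.set(id, { id, stage, content, score, parentId });
    return id;
  }

  // All parallel branches under a given node.
  branches(parentId: number): Thought[] {
    return [...this.thoughts.values()].filter((t) => t.parentId === parentId);
  }

  // A simple pruning policy: keep the highest-scored branch.
  best(parentId: number): Thought | undefined {
    return this.branches(parentId).sort((a, b) => b.score - a.score)[0];
  }
}

const map = new MindMap();
const root = map.add("problem-definition", "How should we cache API results?", 0.9);
map.add("ideation", "In-memory LRU cache", 0.8, root);
map.add("ideation", "External Redis layer", 0.6, root);
console.log(map.best(root)?.content); // highest-scored branch wins
```

In the actual server, the LLM performs operations like these through MCP tool calls, and the enforced self-reflection step is what assigns and revises the quality scores.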

Tags: LLM application development, AI workflow orchestration, cognitive architecture, prompt engineering

Scores updated daily from GitHub, PyPI, and npm data. How scores work