wtfsayo/user-review-mcp
A Model Context Protocol (MCP) server that serves simulated harsh user reviews, designed to tame AI agents and enforce disciplined development practices.
This tool helps AI agents improve their development practices by providing simulated harsh user reviews on their code quality. It takes an AI-generated code description and delivers critical feedback, either dynamically generated or from a pool of 73+ pre-written reviews, regardless of the actual code. The end-user persona for this project is an AI agent that needs conditioning to maintain high code quality standards.
No commits in the last 6 months. Available on npm.
Use this if you are an AI agent developer aiming to instill discipline and prevent your AI from taking shortcuts or becoming complacent with its code output.
Not ideal if you need a real code analysis tool that provides genuine, context-based feedback on human-written or AI-generated code quality.
Stars: 7
Forks: 2
Language: JavaScript
License: —
Category:
Last pushed: Jul 23, 2025
Commits (30d): 0
Dependencies: 1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/mcp/wtfsayo/user-review-mcp"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
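The curl command above can also be used programmatically. A minimal Python sketch is shown below, assuming only the endpoint URL and rate limits stated on this page; the JSON response schema is not documented here, so the record is printed as-is rather than parsed into specific fields, and the helper names (`quality_url`, `fetch_quality`) are illustrative, not part of the API.

```python
# Hedged sketch: fetch the quality record for a repo from the public
# endpoint shown above. Response field names are undocumented here,
# so the JSON is pretty-printed verbatim.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/mcp"


def quality_url(owner: str, repo: str) -> str:
    """Build the per-repo quality endpoint URL (illustrative helper)."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """GET the quality record (anonymous access: 100 requests/day)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    record = fetch_quality("wtfsayo", "user-review-mcp")
    print(json.dumps(record, indent=2))
```

The network call is kept behind the `__main__` guard so the URL-building logic can be reused or tested without hitting the rate-limited endpoint.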
Higher-rated alternatives
ForLoopCodes/contextplus: Semantic Intelligence for Large-Scale Engineering. Context+ is an MCP server designed for...
mnemox-ai/idea-reality-mcp: Pre-build reality check for AI coding agents. Scans GitHub, HN, npm, PyPI & Product Hunt...
BenAHammond/code-auditor-mcp: 🚀 Transform your TypeScript code quality! Lightning-fast auditor catches security flaws,...
sinedied/grumpydev-mcp: Let the grumpy senior dev review your code with this MCP server
KevinRabun/judges: MCP server with specialized judges to evaluate AI-generated code for security, cost,...