root-signals/scorable-mcp

MCP for Scorable Evaluation Platform

Score: 36 / 100 (Emerging)

This project helps developers and AI product managers improve the quality of AI assistant and agent responses. It acts as a bridge, allowing your AI to send its outputs to Scorable's evaluation platform. Your AI assistant receives feedback on quality criteria such as conciseness or relevance, enabling it to refine its answers. It is aimed at professionals building or managing AI agents who need to ensure their outputs meet specific quality standards.

Use this if you are developing or managing AI assistants or agents and need a way to automatically evaluate and improve the quality of their responses against predefined criteria.

Not ideal if you are looking for a standalone AI evaluation platform rather than an integration that connects an existing Scorable account to an MCP-compatible AI agent.

AI quality assurance · LLM evaluation · AI agent development · Prompt engineering · Generative AI management
No License · No Package · No Dependents
Maintenance 6 / 25
Adoption 5 / 25
Maturity 8 / 25
Community 17 / 25

How are scores calculated?

Stars: 11
Forks: 8
Language: Python
License: None
Last pushed: Nov 21, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/mcp/root-signals/scorable-mcp"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
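The same endpoint can be called from code. A minimal Python sketch using only the standard library, assuming the API returns a JSON body (the response schema is not documented on this page, so inspect the decoded result before relying on specific fields):

```python
import json
import urllib.request

# Base path of the quality API shown in the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/mcp"


def quality_url(owner: str, repo: str) -> str:
    # Build the per-repository quality endpoint URL.
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    # Fetch and decode the response. JSON is an assumption here;
    # the field names are not documented on this page.
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


# Example (performs a network request):
#   data = fetch_quality("root-signals", "scorable-mcp")
#   print(json.dumps(data, indent=2))
```

Requests without a key count against the shared 100/day limit, so cache responses rather than polling.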