fabio-rovai/brain-in-the-fish

Evaluate anything. Predict everything. Hallucinate nothing. — SNN-verified document evaluation & prediction credibility MCP server.

Overall score: 40 / 100 (Emerging)

This tool helps professionals such as tender evaluators, essay graders, policy reviewers, and clinical report auditors quickly assess the quality and credibility of documents. You supply a document (an essay, tender response, or clinical report, for example) and it returns a score, a knowledge graph of the document's structure, and a clear verdict (CONFIRMED, FLAGGED, or REJECTED) backed by specific, verifiable evidence. It is designed for anyone who needs to prove a score is fair or to verify claims against the original text.

Use this if you need an auditable, evidence-backed evaluation of a document's claims, especially when dealing with high-stakes content like proposals, academic submissions, or official reports.

Not ideal if you simply need a general summary or quick sentiment analysis without deep structural verification or a detailed audit trail.

Tags: tender-evaluation · essay-grading · policy-review · clinical-reporting · claim-verification
No package · No dependents

Maintenance: 13 / 25
Adoption: 5 / 25
Maturity: 9 / 25
Community: 13 / 25


Stars: 10
Forks: 2
Language: Rust
License: MIT
Last pushed: Mar 28, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/mcp/fabio-rovai/brain-in-the-fish"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
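For programmatic use, the same endpoint can be called from Python. A minimal sketch, assuming the response is JSON and that a key (if you have one) is sent as an `X-API-Key` header; the header name and response shape are assumptions, not documented here, so check the API docs before relying on them:

```python
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/mcp"


def quality_url(owner: str, repo: str) -> str:
    # Path shape taken from the curl example above: /quality/mcp/<owner>/<repo>
    return f"{BASE}/{owner}/{repo}"


req = urllib.request.Request(
    quality_url("fabio-rovai", "brain-in-the-fish"),
    # ASSUMPTION: the keyed tier uses an X-API-Key header; unverified.
    headers={"X-API-Key": "YOUR_KEY"},
)
# urllib.request.urlopen(req) would perform the request; it is left out here
# so the sketch stays side-effect-free. Without a key, drop the header and
# stay within the 100 requests/day anonymous limit.
```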