shinpr/rashomon
Compare, improve, and verify prompt changes with evidence — not vibes.
This tool helps people who work with AI assistants such as Claude verify whether changes to their prompts or skills actually make a difference. You supply your existing prompt or skill along with your proposed changes, and it runs both versions in isolation and compares the real outputs. This gives you concrete evidence of whether your updates produced meaningful improvements, merely stylistic changes, or no change at all.
Use this if you are a developer, scientist, marketer, or any professional who wants to ensure that improvements to your AI assistant's prompts or skills are based on evidence, not just intuition.
Not ideal if you are looking for a tool that simply rewrites a prompt once, without comparing it to a previous version.
Stars: 7
Forks: —
Language: Shell
License: —
Category: —
Last pushed: Jan 29, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/shinpr/rashomon"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
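For scripted access, the endpoint appears to follow the pattern `/api/v1/quality/{category}/{owner}/{repo}`, judging from the example URL above. Below is a minimal sketch of a helper that builds such a URL for any repository; the path structure is inferred from that single example, not from official API documentation:

```python
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-data API URL for a repo.

    Assumes the /quality/{category}/{owner}/{repo} path pattern
    seen in the curl example on this page.
    """
    return f"{BASE}/{category}/{owner}/{repo}"

# Reproduces the curl example above:
url = quality_url("prompt-engineering", "shinpr", "rashomon")
```

You could then fetch `url` with any HTTP client (e.g. `curl` or `urllib.request`); with no API key, requests are rate-limited to 100 per day.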
Higher-rated alternatives
nidhinjs/prompt-master
A Claude skill that writes accurate prompts for any AI tool. Zero tokens or credits wasted....
maxvaega/skillkit
Implementing Skills functionality for your agents
telagod/code-abyss
☠️ One command injects a rogue-cultivator persona and 40+ security-engineering playbooks into Claude Code / Codex CLI | npx code-abyss
jorgegorka/ariadna
Ruby on Rails meta-prompting, context engineering, and spec-driven development system for Claude Code
ursisterbtw/ccprompts
Practical Claude Code commands and subagents