hev/vibecheck
Fast and intuitive evals on any LLM
This tool lets AI engineers quickly test and refine how their Large Language Models (LLMs) respond to prompts. You describe prompts and expected outputs in a simple YAML file, run the evaluation, and get immediate feedback on how well the model meets your criteria, making it easy for anyone building or customizing an LLM application to verify that the model behaves as expected.
Use this if you need a rapid way to test and iterate on the quality and reliability of your LLM's responses, whether for a chatbot, content generator, or other AI-powered feature.
Not ideal if you're a general user simply looking to chat with an LLM without needing to evaluate or customize its behavior.
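The YAML-driven workflow described above might look roughly like the following minimal sketch. The field names (`evals`, `prompt`, `expected`) are illustrative assumptions for this listing, not vibecheck's documented schema:

```yaml
# Hypothetical eval file -- field names are assumptions, not vibecheck's actual schema.
# Each case pairs a prompt with the output you expect from the model.
evals:
  - prompt: "What is the capital of France?"
    expected: "Paris"
  - prompt: "Is 17 a prime number? Answer yes or no."
    expected: "yes"
```

Running the evaluation against such a file would then report, per case, whether the model's response matched the expectation.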
Stars: 18
Forks: —
Language: TypeScript
License: —
Category: —
Last pushed: Jan 14, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/hev/vibecheck"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
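The same request can be made programmatically. Here is a minimal TypeScript sketch; only the endpoint shown in the curl example above is assumed, and the shape of the JSON response is unknown, so it is typed loosely:

```typescript
// Base endpoint taken from the curl example above.
const API_BASE = "https://pt-edge.onrender.com/api/v1/quality/agents";

// Build the endpoint URL for a given "owner/repo" slug.
function agentUrl(slug: string): string {
  return `${API_BASE}/${slug}`;
}

// Fetch the quality data for one agent (requires Node 18+ for global fetch).
// The response schema is not documented here, so the result is left as `unknown`.
async function getAgentData(slug: string): Promise<unknown> {
  const res = await fetch(agentUrl(slug));
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json();
}

// Example usage (not run here to avoid a network dependency):
// getAgentData("hev/vibecheck").then((data) => console.log(data));
```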
Higher-rated alternatives
BloopAI/vibe-kanban
Get 10X more out of Claude Code, Codex or any coding agent
filipecalegario/awesome-vibe-coding
A curated list of vibe coding references, collaborating with AI to write code.
cyhhao/vibe-remote
Your AI coding army, commanded from Slack/Discord/Lark. Stream Claude Code, OpenCode, or Codex...
xin-lai/CodeSpirit
CodeSpirit is a revolutionary full-stack low-code + AI development framework that achieves...
bigdevsoon/100-days-of-code
100 Days of Code | Daily Challenges | Beautifully Crafted Designs | Created for...