0xsomesh/rawbench
RawBench: Powerful, minimal framework for LLM prompt evaluation with YAML configuration, tool execution support, and comprehensive result tracking.
RawBench helps AI engineers and prompt developers systematically test how well their large language model prompts perform. You declare your prompts, the models to run them against, and your test cases in a single YAML configuration file; the tool then executes the tests, measures metrics such as latency and cost, and generates detailed reports plus an interactive dashboard for comparing results across models and prompt variations.
No commits in the last 6 months.
Use this if you are developing LLM applications and need a streamlined, flexible way to systematically evaluate and compare prompt effectiveness across multiple models and scenarios, especially for agents that use external tools.
Not ideal if you are looking for an automated prompt fine-tuning solution or an AI judge for evaluating response quality, as these features are currently on the roadmap.
Stars: 8
Forks: —
Language: TypeScript
License: MIT
Category:
Last pushed: Jul 05, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/0xsomesh/rawbench"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
sierra-research/tau2-bench
τ²-Bench: Evaluating Conversational Agents in a Dual-Control Environment
xlang-ai/OSWorld
[NeurIPS 2024] OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments
bigcode-project/bigcodebench
[ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI
scicode-bench/SciCode
A benchmark that challenges language models to code solutions for scientific problems
THUDM/AgentBench
A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)