suhjohn/llm-workbench
UI for testing prompts across various datasets locally
This tool helps you efficiently evaluate how different large language model (LLM) prompts perform. You supply a prompt template with placeholders and a dataset containing values for those placeholders, then run it across different models and providers. It is aimed at AI product managers, prompt engineers, and anyone building applications that depend on LLM outputs.
No commits in the last 6 months.
Use this if you need a systematic way to compare LLM responses to various inputs and prompt versions without writing code for each test.
Not ideal if you are looking for an automated prompt generation tool or a comprehensive abstraction library for LLM interaction.
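To make the template-plus-dataset workflow concrete, here is a minimal TypeScript sketch of the pairing the tool evaluates. The field names and the render helper are purely illustrative assumptions, not llm-workbench's actual API:

// Illustrative only: these types and the render helper are not part of
// llm-workbench's codebase; they just model the template + dataset idea.
type DatasetRow = Record<string, string>;

const template =
  "Summarize the following {{docType}} in one sentence:\n{{content}}";

const dataset: DatasetRow[] = [
  { docType: "support ticket", content: "Customer cannot reset their password..." },
  { docType: "changelog entry", content: "v2.1 adds streaming responses and fixes..." },
];

// Substitute each {{placeholder}} with the matching dataset field.
function render(tpl: string, row: DatasetRow): string {
  return tpl.replace(/\{\{(\w+)\}\}/g, (_match, key) => row[key] ?? "");
}

// One rendered prompt per dataset row, ready to send to each model under test.
const prompts = dataset.map((row) => render(template, row));
console.log(prompts[0]);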
Stars: 13
Forks: 3
Language: TypeScript
License: MIT
Category: prompt-engineering
Last pushed: Nov 02, 2024
Commits (last 30 days): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/suhjohn/llm-workbench"
Open to everyone: 100 requests/day with no key. A free key raises the limit to 1,000/day.
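The same call from TypeScript, mirroring the curl example above. This is a sketch under two assumptions: the endpoint returns JSON, and the keyless tier is used (how the free key is passed is not specified here). Requires Node 18+ for the global fetch:

// Mirrors the curl example above. The response shape is an assumption;
// the endpoint's actual JSON schema is not documented on this page.
const url =
  "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/suhjohn/llm-workbench";

async function fetchRepoQuality(): Promise<unknown> {
  const res = await fetch(url); // no key needed up to 100 requests/day
  if (!res.ok) {
    throw new Error(`Request failed: ${res.status} ${res.statusText}`);
  }
  return res.json();
}

fetchRepoQuality().then((data) => console.log(data));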
Higher-rated alternatives
genieincodebottle/schemalock
LLM output contract testing CLI: define what your pipeline must return, test it against any...
antsanchez/prompto
Interact with various LLMs in your browser (LangChain.js, Angular)
Coolhand-Labs/coolhand-ruby
Zero-config LLM cost & quality monitoring for Ruby apps - automatically log AI API calls and...
joshualamerton/prompt-trace
Prompt and response tracing for LLM workflows
atjsh/llmlingua-2-js
JavaScript/TypeScript implementation of LLMLingua-2 (Experimental)