suhjohn/llm-workbench

UI for testing prompts across various datasets locally

Score: 35 / 100 (Emerging)

This tool helps you evaluate how different large language model (LLM) prompts perform. You supply a prompt template with placeholders and a dataset containing values for those placeholders, then run it across different models and providers. It is aimed at AI product managers, prompt engineers, and anyone building applications that rely on LLM outputs.
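To make the template-plus-dataset idea concrete, here is a minimal TypeScript sketch of the workflow. The {{placeholder}} syntax, type names, and helper function are illustrative assumptions, not code taken from this repository.

// Illustrative only: placeholder syntax and types are assumed, not from llm-workbench.
type DatasetRow = Record<string, string>;

// Fill {{placeholder}} markers in a template from one dataset row.
function renderPrompt(template: string, row: DatasetRow): string {
  return template.replace(/\{\{(\w+)\}\}/g, (_, key) => row[key] ?? "");
}

const template = "Summarize the following review in one sentence:\n{{review}}";
const dataset: DatasetRow[] = [
  { review: "Great battery life, but the screen scratches easily." },
  { review: "Arrived late and the box was damaged." },
];

// Each row becomes one prompt, which you could then send to any model/provider.
const prompts = dataset.map((row) => renderPrompt(template, row));
console.log(prompts);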

No commits in the last 6 months.

Use this if you need a systematic way to compare LLM responses to various inputs and prompt versions without writing code for each test.

Not ideal if you are looking for an automated prompt generation tool or a comprehensive abstraction library for LLM interaction.

LLM-evaluation prompt-engineering AI-product-development chatbot-testing
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 14 / 25


Stars: 13
Forks: 3
Language: TypeScript
License: MIT
Last pushed: Nov 02, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/suhjohn/llm-workbench"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
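If you prefer to consume the endpoint programmatically, a fetch call like the following works in Node 18+ or the browser. This is a minimal sketch: the response shape is not documented on this page, so the body is simply parsed as JSON and logged.

// Fetch the same quality data shown on this page (no API key needed, 100 requests/day).
const url =
  "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/suhjohn/llm-workbench";

const res = await fetch(url);
if (!res.ok) {
  throw new Error(`Request failed: ${res.status} ${res.statusText}`);
}

// Exact response fields are not documented here, so log the parsed JSON as-is.
const data = await res.json();
console.log(data);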