BrewLLM/brewval
Evaluate prompts for LLM applications
This tool helps developers working with Large Language Models (LLMs) compare how different prompts perform across LLM providers and models. You supply a prompt template and a set of test cases with expected outputs; the tool runs the prompt with each test case against the specified LLMs and reports an accuracy score per model, so you can pick the best LLM and prompt combination for your application.
No commits in the last 6 months.
Use this if you need to systematically test and compare the effectiveness of different prompts and LLMs for a specific natural language task.
Not ideal if you aren't a developer working directly with LLM providers, prompts, and code.
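The description above boils down to an evaluate-and-score loop. The sketch below illustrates that workflow in Python; it is not brewval's actual API, and names such as TestCase and call_model, along with the test-case format, are assumptions made for illustration. The model call is stubbed so the example runs offline.

# A minimal sketch of the evaluation loop described above -- not brewval's
# actual API. TestCase, call_model, and the scoring rule are assumptions.
from dataclasses import dataclass

@dataclass
class TestCase:
    variables: dict  # values substituted into the prompt template
    expected: str    # expected model output

PROMPT_TEMPLATE = "Classify the sentiment of this review as positive or negative: {review}"

TEST_CASES = [
    TestCase({"review": "Absolutely loved it."}, "positive"),
    TestCase({"review": "A complete waste of money."}, "negative"),
]

MODELS = ["model-a", "model-b"]  # placeholder model identifiers

def call_model(model: str, prompt: str) -> str:
    """Stand-in for a real provider call (e.g., an OpenAI or Anthropic client)."""
    return "positive"  # canned response so the sketch runs offline

def accuracy(model: str) -> float:
    """Run every test case through one model; score exact, case-insensitive matches."""
    hits = 0
    for case in TEST_CASES:
        prompt = PROMPT_TEMPLATE.format(**case.variables)
        if call_model(model, prompt).strip().lower() == case.expected.lower():
            hits += 1
    return hits / len(TEST_CASES)

for model in MODELS:
    print(f"{model}: {accuracy(model):.0%} accurate")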
Stars
8
Forks
—
Language
Python
License
—
Category
prompt-engineering
Last pushed
Feb 23, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/BrewLLM/brewval"
Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.
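The same request from Python, using the requests library. This assumes only the endpoint shown in the curl command above; the response schema is not documented here, so the JSON is simply pretty-printed as-is.

# Fetch the repo's quality data from the endpoint shown above.
import json
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/BrewLLM/brewval"
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # fail loudly on HTTP errors (e.g., rate limiting)
print(json.dumps(resp.json(), indent=2))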
Higher-rated alternatives
microsoft/promptbench
A unified evaluation framework for large language models
uptrain-ai/uptrain
UpTrain is an open-source unified platform to evaluate and improve Generative AI applications....
levitation-opensource/Manipulative-Expression-Recognition
MER is a software that identifies and highlights manipulative communication in text from human...
microsoftarchive/promptbench
A unified evaluation framework for large language models
gabe-mousa/Apolien
AI Safety Evaluation Library