empirical-run/empirical

Test and evaluate LLMs and model configurations across all the scenarios that matter for your application

Score: 38 / 100 (Emerging)

This tool helps you quickly evaluate and compare different Large Language Models (LLMs) and their settings for specific tasks, like extracting information or generating text. You input your test data and the LLMs you want to compare, and it provides a web interface to see their outputs side-by-side, score their performance, and quickly iterate on improvements. It's designed for developers building applications powered by LLMs.

167 stars. No commits in the last 6 months.

Use this if you are developing an application that relies on LLMs and need a structured way to test and compare different models or configurations against your specific data and desired outcomes.

Not ideal if you are an end-user simply looking to use an existing LLM application without needing to evaluate or configure the underlying models.

Tags: LLM-evaluation, application-development, model-comparison, AI-testing, natural-language-processing
Flags: Stale (6 months), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 12 / 25


Stars: 167
Forks: 12
Language: TypeScript
License: MIT
Last pushed: Aug 20, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/empirical-run/empirical"

Open to everyone: 100 requests/day with no key needed. Get a free key to raise the limit to 1,000 requests/day.
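
For programmatic use, here is a minimal TypeScript sketch of calling the same endpoint with the standard fetch API (Node 18+ or a browser). It assumes the endpoint returns JSON; the exact response shape is not documented here, and the fetchQualityData name is just an illustrative helper.

  // Minimal sketch: fetch the quality data for empirical-run/empirical
  // from the public API. Assumes global fetch (Node 18+ / browser) and a
  // JSON response; the payload shape is not specified in this listing.
  const endpoint =
    "https://pt-edge.onrender.com/api/v1/quality/llm-tools/empirical-run/empirical";

  async function fetchQualityData(): Promise<unknown> {
    const res = await fetch(endpoint);
    if (!res.ok) {
      throw new Error(`Request failed: ${res.status} ${res.statusText}`);
    }
    return res.json();
  }

  fetchQualityData()
    .then((data) => console.log(data))
    .catch((err) => console.error(err));

This sticks to the anonymous, no-key tier described above; how an API key would be attached (header or query parameter) is not specified in this listing.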