empirical-run/empirical
Test and evaluate LLMs and model configurations across all the scenarios that matter for your application
This tool helps you evaluate and compare different Large Language Models (LLMs) and their configurations on specific tasks, such as information extraction or text generation. You supply your test data and the models you want to compare, and it provides a web interface to view their outputs side by side, score their performance, and iterate quickly on improvements. It's designed for developers building LLM-powered applications.
167 stars. No commits in the last 6 months.
Use this if you are developing an application that relies on LLMs and need a structured way to test and compare different models or configurations against your specific data and desired outcomes.
Not ideal if you are an end-user simply looking to use an existing LLM application without needing to evaluate or configure the underlying models.
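To make the workflow concrete, here is a minimal sketch of the kind of side-by-side comparison loop a tool like this automates. It is illustrative only and does not use Empirical's actual API; the model names, dataset, and scorer are assumptions chosen to show the shape of the task.

// Illustrative sketch only: this is NOT Empirical's API.
// Runnable as-is with ts-node/tsx; swap the stub for a real provider client.

type Sample = { input: string; expected: string };

// Hypothetical stand-in for a real provider call (OpenAI SDK, etc.).
// Stubbed so the sketch runs end to end.
async function callModel(model: string, prompt: string): Promise<string> {
  return `stub output from ${model} for: ${prompt}`;
}

// Deliberately simple scorer (exact match). Real evals typically use
// richer checks: regex, JSON validity, LLM-as-judge, or human review.
function score(output: string, expected: string): number {
  return output.trim() === expected.trim() ? 1 : 0;
}

// Run every sample through every model and report pass counts,
// i.e. the comparison such a tool surfaces in its web UI.
async function compare(models: string[], dataset: Sample[]): Promise<void> {
  for (const model of models) {
    let passed = 0;
    for (const sample of dataset) {
      const output = await callModel(model, sample.input);
      passed += score(output, sample.expected);
    }
    console.log(`${model}: ${passed}/${dataset.length} passed`);
  }
}

compare(
  ["model-a", "model-b"], // placeholder model names
  [{ input: "Extract the city: flight lands in Paris.", expected: "Paris" }],
).catch(console.error);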
Stars: 167
Forks: 12
Language: TypeScript
License: MIT
Category: LLM Tools
Last pushed: Aug 20, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/empirical-run/empirical"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
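If you'd rather consume the endpoint from code, a plain fetch call works. Here is a minimal TypeScript sketch; the response schema is not documented here, so it is typed as unknown and logged raw rather than relying on assumed field names.

// Minimal sketch of calling the same endpoint from TypeScript.
// Requires Node 18+ (or a browser runtime) for the global fetch API.
const url =
  "https://pt-edge.onrender.com/api/v1/quality/llm-tools/empirical-run/empirical";

async function fetchEntry(): Promise<void> {
  const res = await fetch(url); // no key needed up to 100 requests/day
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  const data: unknown = await res.json();
  console.log(data);
}

fetchEntry().catch(console.error);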
Higher-rated alternatives
open-compass/opencompass
OpenCompass is an LLM evaluation platform, supporting a wide range of models (Llama3, Mistral,...
IBM/unitxt
🦄 Unitxt is a Python library for enterprise-grade evaluation of AI performance, offering the...
lean-dojo/LeanDojo
Tool for data extraction and interacting with Lean programmatically.
GoodStartLabs/AI_Diplomacy
Frontier Models playing the board game Diplomacy.
google/litmus
Litmus is a comprehensive LLM testing and evaluation tool designed for GenAI Application...