DigitalHarborFoundation/FlexEval
FlexEval is an LLM evaluation tool designed for practical quantitative analysis.
FlexEval helps you evaluate the performance of large language models (LLMs) and LLM-powered systems, such as chatbots, by letting you define custom metrics and grading rubrics. You feed in conversation logs or LLM outputs, and it produces quantitative scores and analyses stored in a database. It is aimed at AI/ML engineers, researchers, and product managers who need to assess and compare the quality of different LLM models or system iterations.
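To make the workflow concrete, here is a minimal Python sketch of the pattern described above: define a custom metric, apply it to conversation logs, and persist the scores to a database for later comparison. The function names and schema are illustrative assumptions, not FlexEval's actual API.

# Hypothetical sketch of the described workflow; names are illustrative,
# not FlexEval's real interface.
import sqlite3

def response_length(turn: dict) -> float:
    """Custom metric: length of the assistant's reply in characters."""
    return float(len(turn.get("assistant", "")))

def evaluate(conversations: list[dict], metrics: dict) -> list[dict]:
    """Apply each metric to each conversation and collect score rows."""
    rows = []
    for conv_id, turn in enumerate(conversations):
        for name, fn in metrics.items():
            rows.append({"conversation": conv_id, "metric": name, "score": fn(turn)})
    return rows

# Store scores in a local database so runs can be compared over time.
conn = sqlite3.connect("eval_results.db")
conn.execute("CREATE TABLE IF NOT EXISTS scores (conversation INT, metric TEXT, score REAL)")
logs = [{"user": "What is 2+2?", "assistant": "4"}]
for row in evaluate(logs, {"response_length": response_length}):
    conn.execute("INSERT INTO scores VALUES (?, ?, ?)",
                 (row["conversation"], row["metric"], row["score"]))
conn.commit()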
No commits in the last 6 months.
Use this if you need a flexible way to quantitatively measure and compare the outputs of LLMs or LLM-driven applications, allowing for custom evaluation criteria and historical monitoring.
Not ideal if you need a simple, pre-configured 'black box' solution for LLM evaluation and have no need to customize metrics or integrate with a development workflow.
Stars: 16
Forks: —
Language: Python
License: MIT
Category:
Last pushed: Sep 19, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/DigitalHarborFoundation/FlexEval"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
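For reference, a minimal Python equivalent of the curl call above (assumes the third-party requests package is installed; the response schema is not documented here, so the raw JSON is simply printed):

# Fetch the same quality data from the public endpoint shown above.
import requests

url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/DigitalHarborFoundation/FlexEval")
resp = requests.get(url, timeout=30)  # unauthenticated: limited to 100 requests/day
resp.raise_for_status()               # fail loudly on HTTP errors
print(resp.json())                    # print the raw JSON payload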
Higher-rated alternatives
eth-sri/matharena
Evaluation of LLMs on latest math competitions
tatsu-lab/alpaca_eval
An automatic evaluator for instruction-following language models. Human-validated, high-quality,...
HPAI-BSC/TuRTLe
TuRTLe: A Unified Evaluation of LLMs for RTL Generation 🐢 (MLCAD 2025)
nlp-uoregon/mlmm-evaluation
Multilingual Large Language Models Evaluation Benchmark
haesleinhuepf/human-eval-bia
Benchmarking Large Language Models for Bio-Image Analysis Code Generation