DigitalHarborFoundation/FlexEval

FlexEval is an LLM evaluation tool designed for practical quantitative analysis.

Quality score: 24 / 100 (Experimental)

This tool helps evaluate the performance of large language models (LLMs) and LLM-powered systems, such as chatbots, using custom metrics and grading rubrics that you define. You feed in conversation logs or LLM outputs, and it produces quantitative scores and analyses stored in a database. It is aimed at AI/ML engineers, researchers, and product managers who need to assess and compare the quality of different LLM models or system iterations.
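As a rough illustration of the kind of custom metric such a tool scores conversations with, here is a minimal sketch in Python. The function names and the per-turn string signature are assumptions made for illustration, not FlexEval's actual interface.

# Illustrative custom metrics: each takes one turn's text and returns a number.
# The signature is assumed for this sketch, not taken from FlexEval's API.

def response_word_count(turn_text: str) -> float:
    """Length of an assistant response, in words."""
    return float(len(turn_text.split()))

def contains_apology(turn_text: str) -> float:
    """1.0 if the response contains an apology marker, else 0.0."""
    markers = ("sorry", "apologize", "apologies")
    return 1.0 if any(m in turn_text.lower() for m in markers) else 0.0

Metrics like these can be applied per turn or per conversation, and the resulting numbers are what get stored and compared across model or system versions.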

No commits in the last 6 months.

Use this if you need a flexible way to quantitatively measure and compare the outputs of LLMs or LLM-driven applications, allowing for custom evaluation criteria and historical monitoring.

Not ideal if you need a simple, pre-configured 'black box' solution for LLM evaluation without any desire to customize metrics or integrate with a development workflow.

Tags: LLM evaluation, AI model performance, chatbot quality assurance, natural language processing, conversational AI metrics
Status: Stale (6 months) · No package published · No dependents
Maintenance: 2 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 0 / 25


Stars: 16
Forks:
Language: Python
License: MIT
Last pushed: Sep 19, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/DigitalHarborFoundation/FlexEval"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
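If you prefer to pull the same data programmatically, a minimal Python sketch using the endpoint above might look like the following; the response schema is not documented here, so the fields you get back are whatever the API returns.

# Hypothetical usage of the quality endpoint shown above, via the requests library.
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/DigitalHarborFoundation/FlexEval"
resp = requests.get(url, timeout=10)
resp.raise_for_status()

data = resp.json()  # field names depend on the API's schema, not assumed here
print(data)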