serhiismetanskyi/llm-output-evaluation-with-deepeval
DeepEval LLM quality evaluation tests with LLM-as-a-judge
Score: 14 / 100
Experimental · No License · No Package · No Dependents
Maintenance: 13 / 25
Adoption: 0 / 25
Maturity: 1 / 25
Community: 0 / 25
Stars: —
Forks: —
Language: Python
License: —
Category: —
Last pushed: Mar 17, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/serhiismetanskyi/llm-output-evaluation-with-deepeval"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
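
For scripted access, the same endpoint can be queried from Python. A minimal sketch using the requests library; the "score" field name is an assumption, since the response schema is not documented on this page:

import requests

# Endpoint shown in the curl command above; returns this repo's quality report.
URL = (
    "https://pt-edge.onrender.com/api/v1/quality/transformers/"
    "serhiismetanskyi/llm-output-evaluation-with-deepeval"
)

resp = requests.get(URL, timeout=10)
resp.raise_for_status()  # unauthenticated access is limited to 100 requests/day

report = resp.json()
# "score" is an assumed field name, not confirmed by this page.
print(report.get("score"))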
Higher-rated alternatives
eth-sri/matharena (score 52): Evaluation of LLMs on latest math competitions
tatsu-lab/alpaca_eval (score 51): An automatic evaluator for instruction-following language models. Human-validated, high-quality,...
HPAI-BSC/TuRTLe (score 50): TuRTLe: A Unified Evaluation of LLMs for RTL Generation 🐢 (MLCAD 2025)
nlp-uoregon/mlmm-evaluation (score 42): Multilingual Large Language Models Evaluation Benchmark
haesleinhuepf/human-eval-bia (score 41): Benchmarking Large Language Models for Bio-Image Analysis Code Generation