ElevenLiy/MATEval
MATEval is the first multi-agent framework simulating human collaborative discussion for open-ended text evaluation.
This tool helps researchers and content quality assurance teams accurately assess the quality of open-ended text generated by Large Language Models (LLMs). You input the LLM-generated text, and it outputs a detailed, explainable evaluation report highlighting up to five types of errors with human-level accuracy. It's designed for anyone who needs to verify the reliability and quality of AI-generated content in a structured way.
No commits in the last 6 months.
Use this if you need to thoroughly and objectively evaluate open-ended text from LLMs, detecting specific errors and getting clear explanations for the assessment.
Not ideal if you're looking for a simple, quick pass/fail grade without needing detailed error analysis or if your primary need is for short, closed-ended text evaluations.
Stars
28
Forks
2
Language
Python
License
MIT
Category
Last pushed
May 28, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/ElevenLiy/MATEval"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
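Prefer Python? Below is a minimal sketch using only the standard library. The URL is taken from the curl example above; the response format is an assumption (JSON), and no specific field names are relied on.

import json
import urllib.request

# Endpoint copied from the curl example above; no API key needed at the free tier.
URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/ElevenLiy/MATEval"

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)  # assumes the endpoint returns a JSON object

# Field names are not guaranteed; print whatever the API returns.
for key, value in data.items():
    print(f"{key}: {value}")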
Higher-rated alternatives
EvolvingLMMs-Lab/lmms-eval
One-for-All Multimodal Evaluation Toolkit Across Text, Image, Video, and Audio Tasks
vibrantlabsai/ragas
Supercharge Your LLM Application Evaluations 🚀
open-compass/VLMEvalKit
Open-source evaluation toolkit for large multi-modality models (LMMs); supports 220+ LMMs and 80+ benchmarks
EuroEval/EuroEval
The robust European language model benchmark.
Giskard-AI/giskard-oss
🐢 Open-Source Evaluation & Testing library for LLM Agents