ElevenLiy/MATEval

MATEval is the first multi-agent framework simulating human collaborative discussion for open-ended text evaluation.

Score: 32 / 100 (Emerging)

This tool helps researchers and content quality assurance teams accurately assess the quality of open-ended text generated by Large Language Models (LLMs). You input the LLM-generated text, and it outputs a detailed, explainable evaluation report highlighting up to five types of errors with human-level accuracy. It’s designed for anyone needing to verify the reliability and quality of AI-generated content in a structured way.

No commits in the last 6 months.

Use this if you need to thoroughly and objectively evaluate open-ended text from LLMs, detecting specific errors and getting clear explanations for the assessment.

Not ideal if you're looking for a simple, quick pass/fail grade without detailed error analysis, or if your primary need is evaluating short, closed-ended text.

Tags: LLM-evaluation, content-quality-assurance, AI-content-vetting, generative-AI-testing, text-analysis
Stale (6m) · No Package · No Dependents
Maintenance: 2 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 7 / 25


Stars: 28
Forks: 2
Language: Python
License: MIT
Last pushed: May 28, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/ElevenLiy/MATEval"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
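
If you'd rather fetch this record from a script than from curl, a minimal Python sketch using only the standard library is shown below. It calls the same endpoint as the curl example; the response is assumed to be JSON, and since its fields are not documented here, the script simply pretty-prints whatever comes back.

import json
import urllib.request

# Same endpoint as the curl example above; assumed to return a JSON record.
URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/ElevenLiy/MATEval"

with urllib.request.urlopen(URL, timeout=10) as resp:
    record = json.load(resp)

# Response fields are not documented here, so just pretty-print the payload.
print(json.dumps(record, indent=2))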