parameterlab/MASEval

Multi-Agent LLM Evaluation

Score: 55 / 100 (Established)

This is for AI researchers and developers who need to compare how well different multi-agent LLM systems perform. It takes your existing agent implementations (from frameworks like AutoGen or LangChain) and runs them through standard benchmarks or your own custom evaluation tasks. The output helps you understand which agent architectures and configurations are most effective for specific challenges.

Used by 1 other package. Available on PyPI.

Use this if you need to objectively evaluate and compare the performance of various multi-agent LLM systems or individual agents using standardized benchmarks.

Not ideal if you're looking for a tool that helps you build or design multi-agent systems, define communication protocols, or turn LLMs into agents.

Tags: AI-research, LLM-benchmarking, agent-system-evaluation, multi-agent-development, AI-performance-testing

Maintenance: 10 / 25
Adoption: 7 / 25
Maturity: 22 / 25
Community: 16 / 25

Stars: 18
Forks: 7
Language: Python
License: MIT
Last pushed: Mar 12, 2026
Commits (30d): 0
Dependencies: 4
Reverse dependents: 1

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/parameterlab/MASEval"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
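The same data can be fetched from Python instead of curl. A minimal sketch using the requests library; the endpoint URL is the one shown above, the response is assumed to be JSON, and no specific fields are relied on, so the payload is simply printed for inspection:

import requests

# Quality-data endpoint for parameterlab/MASEval (same URL as the curl example above)
URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/parameterlab/MASEval"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()   # fail loudly on 4xx/5xx, e.g. if the daily request limit is hit

data = resp.json()        # assumed: the endpoint returns a JSON document
print(data)               # inspect the payload before depending on specific keys

If you register for a free API key, the same request can run at the higher 1,000/day limit; how the key is passed (header or query parameter) is not specified in this listing, so check the API documentation before adding it.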