John-Wendell/Attention-MoA

Official code of Attention-MoA: Enhancing Mixture-of-Agents via Inter-Agent Semantic Attention and Deep Residual Synthesis

Quality score: 21 / 100 (Experimental)

This project helps AI researchers and developers evaluate the performance of different large language models (LLMs) and Mixture-of-Agents (MoA) systems. It takes as input various LLMs, both large-scale and small-scale, and outputs performance metrics on benchmarks like AlpacaEval 2.0, MT-Bench, and FLASK. It's designed for those who develop or integrate advanced AI models and need to rigorously compare their effectiveness.

Use this if you are developing or fine-tuning large language models and need a robust framework to compare their performance against established benchmarks and other multi-agent systems.

Not ideal if you are a business user looking for a ready-to-use application, or if you are not deeply involved in AI model development and evaluation.

Tags: AI Model Evaluation, Large Language Models, Agent Systems, Machine Learning Research, Performance Benchmarking
No license · No package · No dependents
Maintenance: 10 / 25
Adoption: 6 / 25
Maturity: 5 / 25
Community: 0 / 25


Stars: 24
Forks:
Language: Python
License:
Last pushed: Jan 27, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/agents/John-Wendell/Attention-MoA"

Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.
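For programmatic access, the curl call above can be reproduced with Python's standard library. This is a minimal sketch: only the endpoint URL pattern comes from the listing above, and the shape of the JSON response is an assumption, so the fetched payload is returned as-is rather than parsed into specific fields.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/agents"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a given repository."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality report as JSON.

    The response schema is not documented here, so the raw decoded
    object is returned for the caller to inspect.
    """
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Same repository as the curl example above.
    print(quality_url("John-Wendell", "Attention-MoA"))
```

With the free tier's 100 requests/day limit, callers batching many repositories would want to cache responses locally rather than re-fetching on every run.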