John-Wendell/Attention-MoA
Official code of Attention-MoA: Enhancing Mixture-of-Agents via Inter-Agent Semantic Attention and Deep Residual Synthesis
This project helps AI researchers and developers evaluate the performance of large language models (LLMs) and Mixture-of-Agents (MoA) systems. It accepts a mix of large- and small-scale LLMs as input and reports performance metrics on benchmarks such as AlpacaEval 2.0, MT-Bench, and FLASK. It is designed for practitioners who develop or integrate advanced AI models and need to compare their effectiveness rigorously.
Use this if you are developing or fine-tuning large language models and need a robust framework to compare their performance against established benchmarks and other multi-agent systems.
Not ideal if you are a business user looking for a ready-to-use application, or if you are not deeply involved in AI model development and evaluation.
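To make that workflow concrete, below is a minimal sketch of one generic Mixture-of-Agents round in Python, the repo's language: several proposer models answer a prompt independently, then an aggregator model synthesizes their drafts. Every name here (query_llm, moa_round, the model identifiers) is hypothetical; this listing does not document the repo's actual interfaces, and Attention-MoA's attention-based aggregation will differ from this plain prompt-concatenation baseline.

def query_llm(model: str, prompt: str) -> str:
    """Placeholder for a real LLM API call (hypothetical)."""
    raise NotImplementedError

def moa_round(prompt: str, proposers: list[str], aggregator: str) -> str:
    # Each proposer model answers the prompt independently.
    drafts = [query_llm(m, prompt) for m in proposers]
    # The aggregator sees all drafts and writes one synthesized answer.
    numbered = "\n\n".join(f"Response {i + 1}:\n{d}" for i, d in enumerate(drafts))
    synth_prompt = (
        "Synthesize a single high-quality answer from the responses below.\n\n"
        f"{numbered}\n\nQuestion: {prompt}"
    )
    return query_llm(aggregator, synth_prompt)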
Stars: 24
Forks: —
Language: Python
License: —
Category: —
Last pushed: Jan 27, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/John-Wendell/Attention-MoA"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
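For programmatic access from Python, here is a minimal sketch using the requests library; the API-key header name (X-API-Key) and the response schema are assumptions, since the listing only states the rate limits.

import requests

# Endpoint quoted in the listing above; 100 requests/day without a key.
URL = "https://pt-edge.onrender.com/api/v1/quality/agents/John-Wendell/Attention-MoA"

# Hypothetical header name: the listing says a free key raises the limit
# to 1,000 requests/day, but not how the key should be passed.
headers = {}  # e.g. {"X-API-Key": "<your-key>"}

resp = requests.get(URL, headers=headers, timeout=10)
resp.raise_for_status()
print(resp.json())  # JSON shape is not documented in this listing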
Higher-rated alternatives
google-deepmind/concordia
A library for generative social simulation
Mai-xiyu/Minecraft_AI
AI Play Minecraft
mikelma/craftium
A framework for creating rich, 3D, Minecraft-like single and multi-agent environments for AI...
cocacola-lab/MineLand
Simulating Large-Scale Multi-Agent Interactions with Limited Multimodal Senses and Physical Needs
rezaho/MARSYS
Multi-Agent Reasoning Systems