thu-nics/MoA
[CoLM'25] The official implementation of the paper "MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression".
This project optimizes the performance of large language models (LLMs) on very long inputs. It takes an existing LLM and automatically configures its attention mechanism to use sparser, more efficient patterns. The result is a model that uses less GPU memory and generates responses faster, with minimal accuracy loss. It is aimed at LLM developers and ML engineers who deploy and manage LLMs in production environments.
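To make the idea concrete, here is a minimal sketch (not the repo's actual API) of the kind of heterogeneous sparse attention MoA configures: each attention head gets its own sliding-window span, so some heads attend only locally and cheaply while others keep a wider view. The per-head spans below are hypothetical values standing in for what an automatic search would produce.

```python
# Conceptual sketch, not MoA's implementation: per-head sliding-window
# attention masks with heterogeneous window sizes.
import numpy as np

def sliding_window_mask(seq_len: int, window: int) -> np.ndarray:
    """Causal mask where query i may attend keys in [i-window+1, i]."""
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return (j <= i) & (j > i - window)

# Hypothetical per-head spans (a real system would search for these).
spans = [8, 32, 128]
seq_len = 128
dense = sliding_window_mask(seq_len, seq_len).sum()  # full causal baseline

for w in spans:
    mask = sliding_window_mask(seq_len, w)
    # Fraction of the dense causal attention this head actually computes:
    print(f"window={w:3d}  cost fraction={mask.sum() / dense:.3f}")
```

Smaller windows compute a small fraction of the dense attention, which is where the memory and latency savings come from.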
Use this if you are deploying or managing large language models that need to process lengthy inputs and you want to reduce computational costs and inference latency while maintaining accuracy.
Not ideal if you are a general user of an LLM and do not have access to or expertise in modifying its underlying architecture or deployment environment.
Stars: 156
Forks: 8
Language: Python
License: MIT
Category: —
Last pushed: Jan 14, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/thu-nics/MoA"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
EfficientMoE/MoE-Infinity
PyTorch library for cost-effective, fast and easy serving of MoE models.
raymin0223/mixture_of_recursions
Mixture-of-Recursions: Learning Dynamic Recursive Depths for Adaptive Token-Level Computation...
AviSoori1x/makeMoE
From scratch implementation of a sparse mixture of experts language model inspired by Andrej...
jaisidhsingh/pytorch-mixtures
One-stop solutions for Mixture of Expert modules in PyTorch.
CASE-Lab-UMD/Unified-MoE-Compression
The official implementation of the paper "Towards Efficient Mixture of Experts: A Holistic Study...