thu-nics/MoA

[CoLM'25] The official implementation of the MoA paper.

Score: 46 / 100 (Emerging)

This project optimizes large language models (LLMs) for processing very long texts. It takes an existing LLM and automatically configures its attention mechanism to be sparser and more efficient, producing a model that uses less GPU memory and generates responses much faster without losing accuracy. It is aimed at LLM developers and ML engineers who deploy and manage LLMs in production.
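At its core, the technique prunes attention so that each head only attends within its own window, and that window can scale with input length (heterogeneous, "elastic" sparsity). The repo's actual pipeline searches those per-head configurations automatically; the sketch below only illustrates the masking idea under that reading, and every name and parameter in it (moa_style_masks, base_spans, slopes) is hypothetical rather than the project's API:

import torch

# Illustrative sketch only: NOT thu-nics/MoA's real API. It shows the kind of
# per-head sliding-window masks that heterogeneous sparse attention relies on.

def sliding_window_mask(seq_len: int, window: int) -> torch.Tensor:
    """Causal mask: position i may attend to positions (i - window, i]."""
    idx = torch.arange(seq_len)
    rel = idx.unsqueeze(0) - idx.unsqueeze(1)  # rel[i, j] = j - i
    return (rel <= 0) & (rel > -window)

def moa_style_masks(seq_len, base_spans, slopes):
    """One mask per head; each head's window grows with input length
    (window = base + slope * seq_len), so heads stay cheap on short inputs
    but keep enough context on long ones."""
    masks = []
    for base, slope in zip(base_spans, slopes):
        window = max(1, int(base + slope * seq_len))
        masks.append(sliding_window_mask(seq_len, window))
    return torch.stack(masks)  # (num_heads, seq_len, seq_len)

# Toy usage: four heads with increasingly generous windows.
masks = moa_style_masks(seq_len=16, base_spans=[4, 4, 8, 16], slopes=[0.0, 0.25, 0.5, 1.0])
scores = torch.randn(4, 16, 16)                      # stand-in attention logits
scores = scores.masked_fill(~masks, float("-inf"))   # drop out-of-window entries
attn = torch.softmax(scores, dim=-1)
print(masks.float().mean(dim=(1, 2)))                # per-head fraction of kept entries

In a real deployment the dense mask would never be materialized; sparse kernels skip the pruned entries outright, which is where the memory and speed savings come from.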


Use this if you deploy or manage large language models that must process long inputs and you want to cut compute costs and inference latency while maintaining accuracy.

Not ideal if you are an end user of an LLM without access to, or expertise in, its underlying architecture or deployment environment.

Tags: LLM deployment · MLOps · model optimization · GPU efficiency · natural language processing
No Package · No Dependents
Score breakdown (the four categories sum to the overall 46 / 100):
Maintenance: 10 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 10 / 25


Stars: 156
Forks: 8
Language: Python
License: MIT
Last pushed: Jan 14, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/thu-nics/MoA"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
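The same request from Python, using only the standard library. This assumes the endpoint returns a JSON body; the response fields are not documented on this page, so the sketch just pretty-prints whatever comes back:

import json
import urllib.request

# Same endpoint as the curl command above.
URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/thu-nics/MoA"

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)  # assumes a JSON response, as the API path suggests

print(json.dumps(data, indent=2))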