kyegomez/attn_res
A clean, single-file PyTorch implementation of Attention Residuals (Kimi Team, MoonshotAI, 2026), integrated with Grouped Query Attention (GQA), SwiGLU feed-forward networks, and Rotary Position Embeddings (RoPE).
The Attention Residuals mechanism lets researchers and practitioners experiment with how information flows between Transformer layers, moving beyond simple additive skip connections. The model takes token sequences as input and outputs logits, or a loss for training, making it a building block for experimental language model architectures (see the sketch below).
Available on PyPI.
Use this if you are a machine learning researcher or engineer working on large language models and want to explore novel architectural components like Attention Residuals to improve model performance or efficiency.
Not ideal if you are looking for a pre-trained, production-ready language model or a high-level API for everyday NLP tasks.
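The repository's actual public API isn't shown on this page, so the sketch below uses hypothetical names: ToyAttnResBlock and its gating scheme are stand-ins, not the repo's real classes. It illustrates the token-in, logits/loss-out workflow described above, with a simple learned gate on the attention output standing in for the general attention-residual idea; the paper's exact formulation may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyAttnResBlock(nn.Module):
    """One transformer block with a gated attention residual.

    Hypothetical stand-in: replaces the plain additive residual
    x + attn(x) with a learned per-channel mix. The actual
    Attention Residuals formulation may differ.
    """

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Learned per-channel gate on the attention output.
        self.gate = nn.Parameter(torch.ones(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm(x)
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        return x + self.gate * attn_out  # gated, not plain additive, residual

if __name__ == "__main__":
    batch, seq, dim, vocab = 2, 16, 64, 1000
    embed = nn.Embedding(vocab, dim)
    block = ToyAttnResBlock(dim)
    head = nn.Linear(dim, vocab)

    tokens = torch.randint(0, vocab, (batch, seq))
    logits = head(block(embed(tokens)))  # (batch, seq, vocab)
    loss = F.cross_entropy(logits.view(-1, vocab), tokens.view(-1))
    print(logits.shape, loss.item())
```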
Stars: 8
Forks: 1
Language: Python
License: Apache-2.0
Category: Transformers
Last pushed: Mar 16, 2026
Commits (30d): 0
Dependencies: 1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/kyegomez/attn_res"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
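The same request from Python, as a minimal sketch. Only the endpoint comes from this page; the response schema is not assumed, so the payload is printed as-is:

```python
import requests

# Same endpoint as the curl example above.
url = "https://pt-edge.onrender.com/api/v1/quality/transformers/kyegomez/attn_res"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
# Dump the JSON payload rather than assuming field names.
print(resp.json())
```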
Higher-rated alternatives
lucidrains/x-transformers
A concise but complete full-attention transformer with a set of promising experimental features...
kanishkamisra/minicons
Utility for behavioral and representational analyses of Language Models
lucidrains/simple-hierarchical-transformer
Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT
lucidrains/dreamer4
Implementation of Danijar's latest iteration for his Dreamer line of work
Nicolepcx/Transformers-in-Action
This is the corresponding code for the book Transformers in Action