gmongaras/Cottention_Transformer
Code for the paper "Cottention: Linear Transformers With Cosine Attention"
This project offers an alternative to standard Transformer models for researchers and practitioners working with large language models (LLMs). It provides code for training and fine-tuning BERT- and GPT-style models with Cottention, a linear attention mechanism that replaces softmax attention with cosine similarity. The input is raw text data, and the output is a trained or fine-tuned language model ready for downstream tasks.
Use this if you are an AI researcher or machine learning engineer looking to experiment with novel transformer architectures for improved efficiency or performance in natural language processing tasks.
Not ideal if you are looking for a plug-and-play solution for everyday NLP tasks without needing to delve into model training or architecture specifics.
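To make the core idea concrete, here is a minimal sketch of cosine attention in PyTorch. It is not code from this repository: the function name, tensor shapes, and the non-causal (BERT-style) setting are illustrative assumptions. Normalizing queries and keys to unit length turns their dot products into cosine similarities, and dropping the softmax allows (Q K^T) V to be reassociated as Q (K^T V), which is linear in sequence length.

import torch
import torch.nn.functional as F

def cosine_attention(q, k, v):
    # q, k, v: (batch, heads, seq_len, head_dim); non-causal, BERT-style sketch.
    # L2-normalize queries and keys so their dot products are cosine similarities.
    q = F.normalize(q, dim=-1)
    k = F.normalize(k, dim=-1)
    # With no softmax, (Q K^T) V reassociates as Q (K^T V):
    # O(seq_len * head_dim^2) instead of O(seq_len^2 * head_dim).
    kv = torch.einsum("bhnd,bhne->bhde", k, v)     # (batch, heads, head_dim, head_dim)
    return torch.einsum("bhnd,bhde->bhne", q, kv)  # (batch, heads, seq_len, head_dim)

# Example: out = cosine_attention(*(torch.randn(1, 8, 128, 64) for _ in range(3)))

Causal GPT-style decoding needs a cumulative form of the K^T V product; see the paper and repository for how that case is handled.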
Stars: 20
Forks: —
Language: Cuda
License: —
Category: —
Last pushed: Nov 15, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/gmongaras/Cottention_Transformer"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
lucidrains/x-transformers
A concise but complete full-attention transformer with a set of promising experimental features...
kanishkamisra/minicons
Utility for behavioral and representational analyses of Language Models
lucidrains/simple-hierarchical-transformer
Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT
lucidrains/dreamer4
Implementation of Danijar's latest iteration for his Dreamer line of work
Nicolepcx/Transformers-in-Action
This is the corresponding code for the book Transformers in Action