nanowell/Differential-Transformer-PyTorch
PyTorch implementation of the Differential Transformer architecture for sequence modeling, tailored as a decoder-only model in the style of large language models (LLMs). The architecture incorporates the novel Differential Attention mechanism, multi-head attention, RMSNorm, and SwiGLU.
This project is designed as a decoder-only model for advanced sequence modeling: it takes sequential data as input and produces new, contextually relevant output sequences. It is intended for researchers and machine learning engineers developing or experimenting with large language models.
No commits in the last 6 months.
Use this if you are a machine learning researcher or engineer interested in exploring a novel transformer architecture for sequence generation tasks.
Not ideal if you are looking for a ready-to-use application or a high-level API for deploying pre-trained LLMs without diving into model architecture.
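The Differential Attention mechanism mentioned above can be sketched as follows: a minimal single-head version, assuming the subtraction-of-two-softmax-maps formulation from the DIFF Transformer paper. All class and parameter names, and the λ initialization, are illustrative rather than taken from this repository.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class DifferentialAttention(nn.Module):
    """Minimal single-head sketch of Differential Attention (illustrative).

    Computes the difference of two softmax attention maps, with the second
    map scaled by a learnable lambda, so that common-mode attention noise
    cancels out.
    """
    def __init__(self, d_model: int, d_head: int):
        super().__init__()
        # Two sets of query/key projections, one value projection.
        self.wq = nn.Linear(d_model, 2 * d_head, bias=False)
        self.wk = nn.Linear(d_model, 2 * d_head, bias=False)
        self.wv = nn.Linear(d_model, d_head, bias=False)
        # Learnable lambda; the 0.5 init is an assumption for this sketch.
        self.lam = nn.Parameter(torch.tensor(0.5))
        self.d_head = d_head

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model)
        q1, q2 = self.wq(x).chunk(2, dim=-1)
        k1, k2 = self.wk(x).chunk(2, dim=-1)
        v = self.wv(x)
        scale = 1.0 / math.sqrt(self.d_head)
        a1 = F.softmax(q1 @ k1.transpose(-2, -1) * scale, dim=-1)
        a2 = F.softmax(q2 @ k2.transpose(-2, -1) * scale, dim=-1)
        # Differential attention: subtract the second map before applying V.
        return (a1 - self.lam * a2) @ v

x = torch.randn(2, 8, 32)
out = DifferentialAttention(d_model=32, d_head=16)(x)
```

The full architecture wraps this in a multi-head layout with RMSNorm and SwiGLU feed-forward blocks, plus causal masking for decoder-only generation, which this sketch omits for brevity.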
Stars: 86
Forks: 6
Language: Python
License: MIT
Category:
Last pushed: Oct 27, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/nanowell/Differential-Transformer-PyTorch"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
huggingface/transformers
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in...
kyegomez/LongNet
Implementation of plug in and play Attention from "LongNet: Scaling Transformers to 1,000,000,000 Tokens"
pbloem/former
Simple transformer implementation from scratch in pytorch. (archival, latest version on codeberg)
NVIDIA/FasterTransformer
Transformer related optimization, including BERT, GPT
kyegomez/SimplifiedTransformers
SimplifiedTransformer simplifies transformer block without affecting training. Skip connections,...