jaketae/alibi
PyTorch implementation of "Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation"
This is a tool for machine learning researchers and engineers working on natural language processing. It implements ALiBi (Attention with Linear Biases), which drops positional embeddings in favor of a distance-proportional penalty added to attention scores, so a Transformer trained on short text sequences can extrapolate to much longer sequences at test time without sacrificing performance (see the sketch below). You train the provided model on your text data and get a Transformer that handles input lengths well beyond those seen in training.
No commits in the last 6 months.
Use this if you need to train a Transformer model on shorter text sequences but want it to perform well on much longer sequences during real-world use.
Not ideal if you are looking for a ready-to-use application rather than a foundational component for building custom NLP models.
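For a sense of the mechanism, here is a minimal sketch of the ALiBi bias in PyTorch, assuming a power-of-two number of attention heads. The function names are illustrative, not this repository's actual API:

import torch


def get_alibi_slopes(num_heads: int) -> torch.Tensor:
    # Head-specific slopes from the ALiBi paper: a geometric sequence that
    # starts at 2 ** (-8 / num_heads) and uses that value as its ratio
    # (assumes num_heads is a power of two).
    start = 2 ** (-8 / num_heads)
    return torch.tensor([start ** (h + 1) for h in range(num_heads)])


def alibi_bias(num_heads: int, seq_len: int) -> torch.Tensor:
    # Entry (i, j) is j - i: zero on the diagonal and increasingly negative
    # the further key j lies behind query i. Future (j > i) entries are
    # positive but are removed by the usual causal mask.
    pos = torch.arange(seq_len)
    distance = pos[None, :] - pos[:, None]         # (seq_len, seq_len)
    slopes = get_alibi_slopes(num_heads)           # (num_heads,)
    return slopes[:, None, None] * distance[None]  # (num_heads, seq_len, seq_len)

The bias is added to the pre-softmax attention scores in place of any positional embedding, which is what lets the penalty generalize smoothly to sequence lengths never seen during training.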
Stars
33
Forks
5
Language
Python
License
MIT
Category
transformers
Last pushed
Dec 29, 2021
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/jaketae/alibi"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
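The same data can be fetched from Python; a minimal sketch with the requests library (the response schema is not documented on this page, so this just prints the raw JSON):

import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/jaketae/alibi"

# No key is needed for up to 100 requests/day; the header for passing a
# free key is not documented here.
resp = requests.get(URL, timeout=10)
resp.raise_for_status()
print(resp.json())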
Higher-rated alternatives
lucidrains/x-transformers
A concise but complete full-attention transformer with a set of promising experimental features...
kanishkamisra/minicons
Utility for behavioral and representational analyses of Language Models
lucidrains/simple-hierarchical-transformer
Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT
lucidrains/dreamer4
Implementation of Danijar's latest iteration for his Dreamer line of work
Nicolepcx/Transformers-in-Action
This is the corresponding code for the book Transformers in Action