tatp22/linformer-pytorch
My take on a practical implementation of Linformer for PyTorch.
This project offers an efficient way to process very long sequences, such as entire books or large codebases, using a Transformer variant (Linformer) whose self-attention scales linearly rather than quadratically with sequence length. It takes lengthy text or other sequential data and produces outputs such as next-token predictions or learned feature representations far faster than standard attention allows. Data scientists and machine learning engineers working on large-scale natural language processing or other sequence-modeling tasks will find it particularly useful.
422 stars. Used by 1 other package. No commits in the last 6 months. Available on PyPI.
Use this if you need to build language models or sequence processors that can handle input sequences of millions of tokens without prohibitive computational costs.
Not ideal if your primary concern is interpretability or if you are working with very short sequences where the performance gains of linear attention are negligible.
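The core trick behind Linformer's linear scaling is projecting the length-n keys and values down to a fixed rank k before attention, so the score matrix is n×k instead of n×n. Below is a minimal NumPy sketch of that idea for a single head; it is illustrative only and does not use this library's actual API, and the projection matrices E and F here are random stand-ins for the learned projections.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def linformer_attention(Q, K, V, E, F):
    """Single-head Linformer-style attention sketch.

    Q, K, V: (n, d) queries/keys/values.
    E, F:    (k, n) low-rank projections for keys and values
             (learned in the real model; random here).
    """
    K_proj = E @ K            # (k, d) — keys compressed to rank k
    V_proj = F @ V            # (k, d) — values compressed to rank k
    scores = Q @ K_proj.T / np.sqrt(Q.shape[-1])  # (n, k), not (n, n)
    return softmax(scores) @ V_proj               # (n, d)

rng = np.random.default_rng(0)
n, d, k = 1024, 64, 32       # sequence length, head dim, projected rank
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
E = rng.standard_normal((k, n)) / np.sqrt(n)
F = rng.standard_normal((k, n)) / np.sqrt(n)

out = linformer_attention(Q, K, V, E, F)
print(out.shape)  # (1024, 64)
```

Memory and compute per layer grow with n·k rather than n², which is what makes sequence lengths in the hundreds of thousands or millions tractable when k is kept small.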
Stars: 422
Forks: 37
Language: Python
License: MIT
Category:
Last pushed: Jul 27, 2022
Commits (30d): 0
Dependencies: 2
Reverse dependents: 1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/tatp22/linformer-pytorch"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
Related frameworks
philipperemy/keras-attention
Keras Attention Layer (Luong and Bahdanau scores).
datalogue/keras-attention
Visualizing RNNs using the attention mechanism
ematvey/hierarchical-attention-networks
Document classification with Hierarchical Attention Networks in TensorFlow. WARNING: project is...
thushv89/attention_keras
Keras Layer implementation of Attention for Sequential models
davidmascharka/tbd-nets
PyTorch implementation of "Transparency by Design: Closing the Gap Between Performance and...