jaketae/fnet
PyTorch implementation of FNet: Mixing Tokens with Fourier transforms
This project provides a PyTorch implementation of FNet, a Transformer-style architecture that replaces self-attention with unparameterized Fourier transforms for token mixing. It processes sequence data, just as a Transformer would, but produces model predictions with significantly less computation. If you work with large language models or other sequence-to-sequence tasks, it offers a more efficient alternative to the standard Transformer architecture.
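The core idea is simple enough to sketch in plain Python. The snippet below is a minimal, educational version of FNet's Fourier mixing sublayer as described in the FNet paper: apply a discrete Fourier transform along the hidden dimension, then along the sequence dimension, and keep only the real part. The function names here are illustrative, not the repository's API; the repository itself implements this in PyTorch, where FFT routines make the same operation far faster.

```python
import cmath

def dft(vec):
    """Naive 1D discrete Fourier transform of a sequence of numbers."""
    n = len(vec)
    return [
        sum(vec[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
        for k in range(n)
    ]

def fourier_mix(x):
    """Sketch of FNet token mixing for one example.

    x is a (seq_len, hidden_dim) list of lists of floats. Apply a DFT
    over the hidden dimension, then over the sequence dimension, and
    discard the imaginary part, as in the FNet paper.
    """
    # DFT over the hidden dimension (each row)
    mixed = [dft(row) for row in x]
    # DFT over the sequence dimension (each column)
    cols = [dft(list(col)) for col in zip(*mixed)]
    mixed = list(zip(*cols))
    # Keep only the real part
    return [[v.real for v in row] for row in mixed]

# Toy input: sequence of 2 tokens with hidden size 2
y = fourier_mix([[1.0, 0.0], [0.0, 1.0]])
```

Because the transform has no learned parameters, the mixing sublayer is essentially free in parameter count; in the full model it is followed by the usual feed-forward sublayer, residual connections, and layer normalization.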
No commits in the last 6 months.
Use this if you are a machine learning practitioner looking for a faster, more computationally efficient deep learning model for sequence processing tasks than a standard Transformer.
Not ideal if you need a pre-trained model ready for immediate use rather than a model architecture, or if you are not familiar with deep learning frameworks like PyTorch.
Stars
29
Forks
7
Language
Python
License
MIT
Category
Last pushed
May 17, 2021
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/jaketae/fnet"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
xv44586/toolkit4nlp
Transformer implementations (architectures, task examples, serving, and more)
luozhouyang/transformers-keras
Transformer-based models implemented in TensorFlow 2.x (using Keras).
ufal/neuralmonkey
An open-source tool for sequence learning in NLP built on TensorFlow.
graykode/xlnet-Pytorch
Simple XLNet implementation with a PyTorch wrapper
uzaymacar/attention-mechanisms
Implementations for a family of attention mechanisms, suitable for all kinds of natural language...