lsdefine/attention-is-all-you-need-keras
A Keras+TensorFlow Implementation of the Transformer: Attention Is All You Need
This project provides a ready-to-use implementation of the Transformer model for sequence-to-sequence tasks such as language translation: it takes a sequence in one language (or sequence type) and produces the corresponding sequence in another. It is aimed at machine learning engineers, data scientists, and researchers who want to apply state-of-the-art neural architectures to natural language processing tasks.
715 stars. No commits in the last 6 months.
Use this if you need a Keras and TensorFlow based implementation of the Transformer architecture for tasks such as machine translation or converting text from one structured format to another.
Not ideal if you are looking for an off-the-shelf application to perform translation without any coding, or if your primary framework is not Keras/TensorFlow.
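The repository implements the architecture from "Attention Is All You Need", whose core building block is scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V. As a quick illustration of that operation (a plain NumPy sketch for clarity, not code taken from this repository):

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Compute softmax(Q K^T / sqrt(d_k)) V, the Transformer's core operation."""
    d_k = q.shape[-1]
    # Similarity scores between each query and each key, scaled by sqrt(d_k)
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(d_k)
    # Numerically stable softmax over the key axis
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Weighted sum of the values
    return weights @ v, weights

# Toy example: 4 query positions, 6 key/value positions, dimension d_k = 8
rng = np.random.default_rng(0)
q = rng.random((4, 8))
k = rng.random((6, 8))
v = rng.random((6, 8))
out, w = scaled_dot_product_attention(q, k, v)
# out has shape (4, 8); each row of w is a probability distribution over the 6 keys
```

The repository's Keras implementation wraps this same operation in multi-head attention layers with learned projections for Q, K, and V.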
Stars
715
Forks
187
Language
Python
License
—
Category
ML frameworks
Last pushed
Sep 24, 2021
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/lsdefine/attention-is-all-you-need-keras"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
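The same endpoint can be called from Python. A minimal sketch that builds the URL shown in the curl example (the actual fetch is left as a comment since the response schema is not documented on this page):

```python
# Base path of the quality API used in the curl example above
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"

def quality_url(owner: str, repo: str) -> str:
    """Return the per-repository quality-API URL for a GitHub owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"

url = quality_url("lsdefine", "attention-is-all-you-need-keras")
# To fetch the JSON payload (requires network access):
#   import urllib.request, json
#   data = json.load(urllib.request.urlopen(url))
```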
Higher-rated alternatives
philipperemy/keras-attention
Keras Attention Layer (Luong and Bahdanau scores).
tatp22/linformer-pytorch
My take on a practical implementation of Linformer for Pytorch.
ematvey/hierarchical-attention-networks
Document classification with Hierarchical Attention Networks in TensorFlow. WARNING: project is...
datalogue/keras-attention
Visualizing RNNs using the attention mechanism
thushv89/attention_keras
Keras Layer implementation of Attention for Sequential models