lsdefine/attention-is-all-you-need-keras

A Keras+TensorFlow Implementation of the Transformer: Attention Is All You Need

Score: 43 / 100 (Emerging)

This project provides a ready-to-use implementation of the Transformer model, designed for sequence-to-sequence tasks such as language translation: it takes a sequence in one language (or format) and outputs it in another. It is aimed at machine learning engineers, data scientists, and researchers who want to apply the Transformer architecture to natural language processing tasks.
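The repository implements this in Keras/TensorFlow; as a framework-agnostic illustration (not the repo's actual code), the core operation of the Transformer is scaled dot-product attention, which can be sketched in a few lines of NumPy:

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Compute softmax(Q K^T / sqrt(d_k)) V, the core Transformer operation."""
    d_k = q.shape[-1]
    # Similarity of each query to each key, scaled by sqrt(d_k)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_k)   # (batch, len_q, len_k)
    # Numerically stable softmax over the key axis
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output position is a weighted average of the value vectors
    return weights @ v, weights

# Toy shapes: batch of 1, 3 query positions, 4 key/value positions, d_k = 8
rng = np.random.default_rng(0)
q = rng.standard_normal((1, 3, 8))
k = rng.standard_normal((1, 4, 8))
v = rng.standard_normal((1, 4, 8))
out, w = scaled_dot_product_attention(q, k, v)
print(out.shape)  # (1, 3, 8)
```

In the full model this runs over multiple heads with learned projections of Q, K, and V; the repo wraps that (plus positional encoding and the encoder/decoder stacks) in Keras layers.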

715 stars. No commits in the last 6 months.

Use this if you need a Keras and TensorFlow based implementation of the Transformer architecture for tasks such as machine translation or converting text from one structured format to another.

Not ideal if you are looking for an off-the-shelf application to perform translation without any coding, or if your primary framework is not Keras/TensorFlow.

Machine Translation · Natural Language Processing · Sequence-to-Sequence · Deep Learning · Research · Text Transformation
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 8 / 25
Community 25 / 25


Stars: 715
Forks: 187
Language: Python
License: None
Last pushed: Sep 24, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/lsdefine/attention-is-all-you-need-keras"

The API is open to everyone at 100 requests/day with no key required; a free key raises the limit to 1,000 requests/day.