Rishit-dagli/Fast-Transformer

An implementation of Additive Attention

Score: 51 / 100 (Established)

This is a developer tool that provides a TensorFlow implementation of the Fastformer model, which replaces standard pairwise self-attention with additive attention for efficient processing of long text sequences. It takes token sequences as input and outputs contextual representations, scaling linearly rather than quadratically with sequence length. Machine learning engineers and researchers working on natural language processing tasks would use this.
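To illustrate why additive attention is cheap on long sequences, here is a minimal single-head NumPy sketch of the Fastformer mechanism: all queries are pooled into one global query, which modulates the keys, which are pooled into one global key, which modulates the values. The function and parameter names (`additive_attention`, `w_q`, `w_k`) and the final residual step are illustrative assumptions for this sketch, not this repository's exact API, and the paper's final linear transform is omitted for brevity.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def additive_attention(Q, K, V, w_q, w_k):
    """Q, K, V: (n, d); w_q, w_k: learned (d,) vectors.

    Runs in O(n * d) instead of the O(n^2 * d) of pairwise attention,
    because tokens interact only through global summary vectors.
    """
    d = Q.shape[-1]
    # Pool all queries into a single global query vector.
    alpha = softmax(Q @ w_q / np.sqrt(d))   # (n,) attention over positions
    q_global = alpha @ Q                    # (d,)
    # Modulate keys element-wise by the global query, then pool them.
    P = q_global * K                        # (n, d)
    beta = softmax(P @ w_k / np.sqrt(d))    # (n,)
    k_global = beta @ P                     # (d,)
    # Modulate values by the global key; add a query residual (sketch only).
    return k_global * V + Q                 # (n, d)

rng = np.random.default_rng(0)
n, d = 8, 16
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
out = additive_attention(Q, K, V, rng.standard_normal(d), rng.standard_normal(d))
print(out.shape)
```

Note that no n-by-n attention matrix is ever formed; the only per-position work is a dot product with a learned vector, which is what makes sequence lengths in the thousands tractable.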

148 stars. No commits in the last 6 months. Available on PyPI.

Use this if you are a machine learning engineer or researcher building models that need to process very long text sequences efficiently using TensorFlow.

Not ideal if you are looking for an out-of-the-box solution for text analysis or if you are not comfortable working with TensorFlow and deep learning model implementations.

natural-language-processing deep-learning large-language-models text-modeling machine-learning-engineering
Stale for 6 months
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 25 / 25
Community: 16 / 25

How are scores calculated?

Stars: 148
Forks: 22
Language: Jupyter Notebook
License: Apache-2.0
Last pushed: Feb 15, 2022
Commits (30d): 0
Dependencies: 3

Get this data via API:

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Rishit-dagli/Fast-Transformer"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.