legacyai/tf-transformers
State-of-the-art faster Transformer with TensorFlow 2.0 (NLP, Computer Vision, Audio).
This project helps machine learning engineers and researchers build and deploy advanced AI models for text, image, and (soon) audio processing tasks. It takes raw data (text, images, audio) and lets you create high-performance models for tasks such as translating languages, classifying images, or generating text. The primary users are machine learning practitioners who work with TensorFlow 2.0 and need to implement Transformer-based architectures efficiently.
No commits in the last 6 months.
Use this if you are a machine learning engineer or researcher developing advanced AI models with TensorFlow 2.0 and require significantly faster processing and deployment of Transformer architectures for natural language processing, computer vision, or audio tasks.
Not ideal if you are new to machine learning frameworks or prefer a higher-level, more abstracted interface for basic model training without needing deep control over Transformer-specific optimizations.
Stars
85
Forks
3
Language
Jupyter Notebook
License
Apache-2.0
Category
NLP
Last pushed
Mar 16, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/legacyai/tf-transformers"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
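For programmatic access, the curl command above can be reproduced from Python with the standard library alone. This is a minimal sketch: the endpoint path comes from the example above, but the shape of the JSON response is an assumption, so inspect it before relying on specific fields.

```python
import json
from urllib.request import urlopen  # stdlib; no extra dependencies

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the API URL; path segments follow the curl example above."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("nlp", "legacyai", "tf-transformers")
# Uncomment to fetch (no key needed, subject to the 100 requests/day limit):
# data = json.load(urlopen(url, timeout=10))
# print(data)  # response fields are not documented here -- inspect first
```

With a free API key the daily limit rises to 1,000 requests; how the key is passed (header vs. query parameter) is not documented on this page.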
Higher-rated alternatives
xv44586/toolkit4nlp
Transformers implementation (architecture, task examples, serving, and more)
luozhouyang/transformers-keras
Transformer-based models implemented in TensorFlow 2.x (using Keras).
ufal/neuralmonkey
An open-source tool for sequence learning in NLP built on TensorFlow.
graykode/xlnet-Pytorch
Simple XLNet implementation with a PyTorch wrapper
uzaymacar/attention-mechanisms
Implementations for a family of attention mechanisms, suitable for all kinds of natural language...