Transformer-Implementations and ViT_PyTorch
About Transformer-Implementations
UdbhavPrasad072300/Transformer-Implementations
Library - Vanilla, ViT, DeiT, BERT, GPT
This project offers pre-built transformer implementations, including the vanilla Transformer, Vision Transformers (ViT, DeiT), BERT, and GPT, so that machine learning engineers and researchers can apply these architectures without writing them from scratch. It takes raw data such as text for language tasks or image tensors for computer vision, and outputs trained models or predictions. The ideal user is a machine learning practitioner looking to apply state-of-the-art transformer architectures.
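The repository's own API is not shown here. To illustrate the core mechanism that all of the listed architectures share, below is a minimal NumPy sketch of scaled dot-product attention, the building block of the vanilla Transformer; the function name and tensor shapes are my own choices, not the library's.

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (seq_len, d) -- single head, no batch, for clarity
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                  # pairwise similarity, scaled by sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)   # subtract row max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # row-wise softmax -> attention weights
    return weights @ v                             # weighted average of value vectors

rng = np.random.default_rng(0)
q, k, v = (rng.random((4, 8)) for _ in range(3))
out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # (4, 8)
```

Each output row is a convex combination of the rows of `v`, weighted by how strongly the corresponding query attends to each key; multi-head attention in the full models runs several such maps in parallel on learned projections.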
About ViT_PyTorch
godofpdog/ViT_PyTorch
This is a simple PyTorch implementation of the Vision Transformer (ViT) described in the paper "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale".
This project helps machine learning engineers and researchers quickly set up and train a Vision Transformer (ViT) model for image classification. You input a dataset of images, and it outputs a trained model capable of categorizing new images. It is aimed at practitioners building computer vision systems.
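The step that gives the paper its title is splitting an image into fixed 16x16 patches and flattening each patch into a token, so the image becomes a sequence a standard Transformer can process. This repo's internals are not reproduced here; the following is a standalone NumPy sketch of that patchification step, with my own function name and a conventional 3x224x224 input.

```python
import numpy as np

def image_to_patches(img, patch=16):
    # img: (C, H, W) channel-first image; H and W must be divisible by `patch`
    C, H, W = img.shape
    gh, gw = H // patch, W // patch          # patch grid, e.g. 14 x 14 for 224/16
    x = img.reshape(C, gh, patch, gw, patch) # split H and W into (grid, patch) pairs
    x = x.transpose(1, 3, 2, 4, 0)           # -> (gh, gw, patch, patch, C)
    # each flattened patch becomes one token of dimension patch*patch*C
    return x.reshape(gh * gw, patch * patch * C)

img = np.random.default_rng(0).random((3, 224, 224))
tokens = image_to_patches(img)
print(tokens.shape)  # (196, 768): 196 "words", each 16*16*3 = 768 values
```

In the full ViT, each 768-dimensional token is then linearly projected, a learnable class token is prepended, position embeddings are added, and the sequence is fed through a Transformer encoder.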