UdbhavPrasad072300/Transformer-Implementations
Library - Vanilla, ViT, DeiT, BERT, GPT
This project offers pre-built transformer models like BERT, GPT, and Vision Transformers (ViT, DeiT) to help machine learning engineers and researchers quickly implement advanced AI capabilities. It takes raw data such as text for language tasks or image tensors for computer vision, and outputs trained models or predictions. The ideal user is a machine learning practitioner looking to apply state-of-the-art transformer architectures.
No commits in the last 6 months. Available on PyPI.
Use this if you are a machine learning engineer or researcher who wants to rapidly prototype or deploy solutions for natural language processing or computer vision using established transformer models.
Not ideal if you are a non-technical end-user looking for a ready-to-use application, or if you need to train Vision Transformers on very small datasets without extensive pre-training.
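To illustrate the core idea behind the ViT and DeiT models the library implements (independent of this library's API, which is not shown here), a minimal NumPy sketch of ViT-style patch extraction: an image is split into non-overlapping 16x16 patches, each flattened into a token vector before being fed to the transformer.

```python
import numpy as np

def image_to_patches(img, patch=16):
    """Split an (H, W, C) image into non-overlapping flattened patches,
    as in ViT's patch embedding. Assumes H and W are divisible by `patch`."""
    h, w, c = img.shape
    # Carve the image into a grid of (patch x patch x C) blocks.
    patches = img.reshape(h // patch, patch, w // patch, patch, c)
    patches = patches.transpose(0, 2, 1, 3, 4)
    # Flatten each block into one token vector.
    return patches.reshape(-1, patch * patch * c)

img = np.zeros((224, 224, 3))
tokens = image_to_patches(img)
print(tokens.shape)  # (196, 768): 14*14 patches, each 16*16*3 values
```

A 224x224 RGB image yields 196 tokens of dimension 768, matching the sequence length and embedding size used in the original ViT-Base configuration.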
Stars: 69
Forks: 18
Language: Jupyter Notebook
License: MIT
Last pushed: Sep 30, 2021
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/UdbhavPrasad072300/Transformer-Implementations"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
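The same endpoint can be queried from Python using only the standard library. This is a minimal sketch; the `X-API-Key` header name used for the keyed tier is an assumption, so check the API documentation before relying on it.

```python
import json
from urllib.request import Request, urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner, repo):
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner, repo, api_key=None):
    """Fetch quality data; pass an API key for the 1,000/day tier.
    NOTE: the "X-API-Key" header name is an assumption, not confirmed."""
    headers = {"X-API-Key": api_key} if api_key else {}
    req = Request(quality_url(owner, repo), headers=headers)
    with urlopen(req, timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))

print(quality_url("UdbhavPrasad072300", "Transformer-Implementations"))
```

Calling `fetch_quality("UdbhavPrasad072300", "Transformer-Implementations")` returns the parsed JSON payload, subject to the 100 requests/day unauthenticated limit.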
Related models
jaehyunnn/ViTPose_pytorch - An unofficial implementation of ViTPose [Y. Xu et al., 2022]
tintn/vision-transformer-from-scratch - A simplified PyTorch implementation of Vision Transformer (ViT)
icon-lab/ResViT - Official implementation of ResViT: Residual Vision Transformers for Multi-modal Medical Image Synthesis
gupta-abhay/pytorch-vit - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
NVlabs/GroupViT - Official PyTorch implementation of GroupViT: Semantic Segmentation Emerges from Text...