UdbhavPrasad072300/Transformer-Implementations

Library - Vanilla, ViT, DeiT, BERT, GPT

Quality score: 52 / 100 (Established)

This project offers pre-built transformer models like BERT, GPT, and Vision Transformers (ViT, DeiT) to help machine learning engineers and researchers quickly implement advanced AI capabilities. It takes raw data such as text for language tasks or image tensors for computer vision, and outputs trained models or predictions. The ideal user is a machine learning practitioner looking to apply state-of-the-art transformer architectures.

No commits in the last 6 months. Available on PyPI.

Use this if you are a machine learning engineer or researcher who wants to rapidly prototype or deploy solutions for natural language processing or computer vision using established transformer models.

Not ideal if you are a non-technical end-user looking for a ready-to-use application, or if you need to train Vision Transformers on very small datasets without extensive pre-training.

Tags: natural-language-processing, computer-vision, image-classification, language-translation, deep-learning-research

Status: Stale (no commits in 6 months) · No dependents

Score breakdown:
- Maintenance: 0 / 25
- Adoption: 8 / 25
- Maturity: 25 / 25
- Community: 19 / 25


Stars: 69
Forks: 18
Language: Jupyter Notebook
License: MIT
Last pushed: Sep 30, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/UdbhavPrasad072300/Transformer-Implementations"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
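The same lookup can be scripted instead of typed by hand. A minimal sketch in Python's standard library, assuming the endpoint returns JSON (the response schema is not documented here); `quality_url` and `fetch_quality` are helper names chosen for this example, not part of the service:

```python
import json
import urllib.request

# Base path taken from the curl example above; the "transformers"
# segment is reproduced as-is from that URL.
BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality report; assumes a JSON response body."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Anonymous access is rate-limited to 100 requests/day per the note above.
    print(quality_url("UdbhavPrasad072300", "Transformer-Implementations"))
```

With a free API key, the same request would presumably carry the key as a header or query parameter; the listing does not specify which, so that detail is left out here.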