rasbt/faster-pytorch-blog
Outlining techniques for improving the training performance of your PyTorch model without compromising its accuracy
This project collects practical methods for significantly speeding up the training of PyTorch deep learning models without losing accuracy. It takes your existing PyTorch model and training scripts as a starting point and shows how to make them train faster. Data scientists and machine learning engineers who work with PyTorch models will find this useful.
128 stars. No commits in the last 6 months.
Use this if you are a data scientist or machine learning engineer experiencing slow training times with your PyTorch models and want to optimize performance.
Not ideal if you are looking for guidance on initial model building or improving model accuracy rather than training speed.
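To give a concrete flavor of this kind of optimization, here is a minimal sketch (illustrative only, not code from the repository itself) of one widely used PyTorch training speed-up: automatic mixed precision (AMP), which runs the forward pass in a lower-precision dtype while keeping the optimizer state in float32. The model, data shapes, and hyperparameters below are arbitrary placeholders.

```python
# Illustrative AMP training step; model/data are arbitrary placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 32)
y = torch.randint(0, 2, (8,))

# Pick a dtype the current device supports: fp16 on CUDA, bf16 on CPU.
device_type = "cuda" if torch.cuda.is_available() else "cpu"
amp_dtype = torch.float16 if device_type == "cuda" else torch.bfloat16

def train_step(x, y):
    optimizer.zero_grad()
    # Forward pass and loss in reduced precision; the backward pass and
    # the optimizer update still operate on float32 master weights.
    with torch.autocast(device_type=device_type, dtype=amp_dtype):
        loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()

loss = train_step(x, y)
```

On CUDA with float16 you would normally also wrap the backward pass with `torch.cuda.amp.GradScaler` to avoid gradient underflow; it is omitted here to keep the sketch short.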
Stars: 128
Forks: 16
Language: Python
License: —
Category:
Last pushed: Apr 03, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/rasbt/faster-pytorch-blog"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
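The same endpoint can be called from Python with only the standard library. This is a sketch based solely on the curl command above: the `quality_url`/`fetch_quality` helper names are made up here, and the JSON response schema is not documented on this page, so the result is returned as-is.

```python
import json
import urllib.request

# Base URL inferred from the example curl command above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner, repo):
    # Build the endpoint URL for a given GitHub owner/repo pair.
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner, repo):
    # Fetch and decode the JSON response (schema undocumented here).
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)
```

For example, `fetch_quality("rasbt", "faster-pytorch-blog")` requests the same URL as the curl command shown above.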
Higher-rated alternatives
openvinotoolkit/nncf
Neural Network Compression Framework for enhanced OpenVINO™ inference
huggingface/optimum
🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers...
NVIDIA/Megatron-LM
Ongoing research training transformer models at scale
huggingface/optimum-intel
🤗 Optimum Intel: Accelerate inference with Intel optimization tools
eole-nlp/eole
Open language modeling toolkit based on PyTorch