Santosh-Gupta/SpeedTorch

Library for faster pinned CPU <-> GPU transfer in Pytorch

Score: 49 / 100 (Emerging)

This project helps machine learning engineers and researchers accelerate deep learning workflows, especially when training models with very large parameter counts such as embedding tables. It speeds up the movement of tensors between main memory (CPU RAM) and graphics memory (GPU RAM), so you can train more complex models faster and make better use of your hardware.

683 stars. No commits in the last 6 months. Available on PyPI.

Use this if you are training large deep learning models in PyTorch, especially those with numerous embeddings, and are encountering performance bottlenecks due to slow data transfer between CPU and GPU memory.

Not ideal if your deep learning models are small, or if you are not experiencing significant CPU-GPU data transfer bottlenecks in your PyTorch training.
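The technique SpeedTorch targets can be sketched in plain PyTorch: page-locked ("pinned") host memory plus non-blocking copies. This is vanilla PyTorch, not SpeedTorch's own API; the helper name `stage_to_device` is ours for illustration.

```python
import torch

def stage_to_device(cpu_tensor: torch.Tensor, device: torch.device) -> torch.Tensor:
    """Copy a CPU tensor to `device`, using pinned memory when CUDA is present."""
    if device.type == "cuda":
        # Pinned (page-locked) memory lets the GPU's DMA engine pull data
        # directly, and non_blocking=True lets the copy overlap with compute.
        cpu_tensor = cpu_tensor.pin_memory()
    dst = torch.empty_like(cpu_tensor, device=device)
    dst.copy_(cpu_tensor, non_blocking=True)
    return dst

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
src = torch.randn(1024, 128)  # e.g. a slab of embedding rows
dst = stage_to_device(src, device)
```

Without pinning, every host-to-device copy first goes through an internal staging buffer; pinning the source tensor removes that extra hop, which is where most of the speedup for large embedding transfers comes from.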

deep-learning neural-networks pytorch-optimization embedding-training gpu-acceleration
Stale 6m
Maintenance 0 / 25
Adoption 10 / 25
Maturity 25 / 25
Community 14 / 25


Stars: 683
Forks: 40
Language: Python
License: MIT
Last pushed: Feb 21, 2020
Commits (30d): 0
Dependencies: 2

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/Santosh-Gupta/SpeedTorch"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
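The same endpoint can be called from Python with only the standard library. This is a sketch: the response schema is not documented here, so the code just parses whatever JSON comes back, and the helper names are ours.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/embeddings"

def quality_url(owner: str, repo: str) -> str:
    # Build the endpoint URL for a given GitHub owner/repo pair.
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    # One request; the free tier allows 100 requests/day without a key.
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.loads(resp.read().decode("utf-8"))

url = quality_url("Santosh-Gupta", "SpeedTorch")
```

Calling `fetch_quality("Santosh-Gupta", "SpeedTorch")` is equivalent to the `curl` command above.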