BlackHC/toma
Helps you write algorithms in PyTorch that adapt to the available (CUDA) memory
Deep learning models in PyTorch often fail with CUDA out-of-memory errors, especially on GPUs with limited memory. toma wraps your PyTorch code so that, when a call fails due to a memory limit, it retries with smaller batch or chunk sizes until it succeeds, adapting automatically to the memory actually available. Machine learning engineers and researchers who train or run inference on large models will find this useful.
437 stars. Used by 1 other package. No commits in the last 6 months. Available on PyPI.
Use this if you are frequently encountering CUDA out-of-memory errors when running PyTorch models and want an automated way to adapt your batch or chunk sizes.
Not ideal if your operations are not memory-intensive, as there is a small overhead involved in the memory adaptation process.
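The core pattern toma automates can be sketched in plain Python: call a function with a batch size and, on an out-of-memory error, halve the size and retry. This is an illustrative sketch, not toma's actual API (toma exposes decorators; see its README): `run_adaptive` and the halving policy are assumptions, and `MemoryError` stands in for the CUDA OOM `RuntimeError` you would catch in real PyTorch code so the example runs anywhere.

```python
def run_adaptive(fn, initial_batchsize, min_batchsize=1):
    """Call fn(batchsize), halving batchsize after each memory error.

    Hypothetical helper illustrating the retry-on-OOM technique;
    not part of the toma library.
    """
    batchsize = initial_batchsize
    while True:
        try:
            return fn(batchsize)
        except MemoryError:
            if batchsize <= min_batchsize:
                raise  # cannot shrink any further
            batchsize //= 2  # retry with a smaller batch


# Toy workload: pretend any batch above 100 items exhausts memory.
def process(batchsize):
    if batchsize > 100:
        raise MemoryError(f"batch of {batchsize} too large")
    return batchsize

print(run_adaptive(process, initial_batchsize=512))  # 512 -> 256 -> 128 -> 64
```

The small overhead mentioned above comes from the failed attempts before a working size is found; toma's README describes caching strategies that reuse previously successful batch sizes to avoid repeating this search.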
Stars: 437
Forks: 10
Language: Python
License: MIT
Last pushed: Aug 29, 2024
Commits (30d): 0
Dependencies: 2
Reverse dependents: 1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/BlackHC/toma"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
mrdbourke/pytorch-deep-learning
Materials for the Learn PyTorch for Deep Learning: Zero to Mastery course.
xl0/lovely-tensors
Tensors, for human consumption
stared/livelossplot
Live training loss plot in Jupyter Notebook for Keras, PyTorch and others
dataflowr/notebooks
code for deep learning courses
dvgodoy/PyTorchStepByStep
Official repository of my book: "Deep Learning with PyTorch Step-by-Step: A Beginner's Guide"