plutonium-239/memsave_torch

Lowering PyTorch's Memory Consumption for Selective Differentiation

Quality score: 27 / 100 (Experimental)

This package helps deep learning engineers and researchers manage GPU memory more efficiently when training large PyTorch models. It converts existing neural network layers into memory-saving versions, producing a functionally identical model that consumes less memory, which is especially useful when only a subset of the model's parameters is being updated.
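The selective-differentiation setting described above can be illustrated with plain PyTorch: when some layers are frozen, their weights need no gradients, which is the scenario this package targets. The commented-out conversion call is a hypothetical sketch of how the package might wrap such a model; the exact entry point is an assumption, so consult the repository's README for the real API.

```python
import torch
import torch.nn as nn

# A small model where only the final layer will be trained
# (selective differentiation): the frozen layer needs no
# weight gradients during the backward pass.
model = nn.Sequential(
    nn.Linear(32, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)

# Freeze the first linear layer.
for param in model[0].parameters():
    param.requires_grad = False

# Hypothetical conversion step -- the memsave_torch API shown here is
# an assumption, not confirmed by this page:
# from memsave_torch.nn import convert_to_memory_saving
# model = convert_to_memory_saving(model)

x = torch.randn(8, 32)
loss = model(x).sum()
loss.backward()

# The frozen layer accumulated no gradient; the trainable one did.
assert model[0].weight.grad is None
assert model[2].weight.grad is not None
```

With a standard model, PyTorch still stores the inputs of frozen layers for the backward pass; a memory-saving conversion aims to avoid that storage when those gradients are never requested.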

No commits in the last 6 months.

Use this if you are a deep learning practitioner training PyTorch models and frequently encounter "out of memory" errors, particularly when fine-tuning or using techniques where only certain layers or parameters require gradient computation.

Not ideal if your PyTorch models are small, you always train all parameters, or you are not experiencing memory consumption issues on your current hardware.

deep-learning neural-networks GPU-memory-optimization model-training fine-tuning
Stale 6m No Package No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 6 / 25


Stars: 12
Forks: 1
Language: Python
License: MIT
Last pushed: Aug 29, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/plutonium-239/memsave_torch"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.