HomebrewML/revlib
Simple and efficient RevNet-Library for PyTorch with XLA and DeepSpeed support and parameter offload
RevLib is a library for machine learning engineers who need to train deep models in significantly less GPU memory. It takes a standard PyTorch network and rewires it into reversible (RevNet-style) blocks, so training uses far less memory, which is especially beneficial for very deep networks or large batch sizes. It targets practitioners and researchers who hit GPU memory limits while training large models.
132 stars. No commits in the last 6 months.
Use this if you are training deep neural networks with PyTorch and frequently run out of GPU memory, even when using techniques like gradient checkpointing.
Not ideal if your models are small, you aren't experiencing GPU memory constraints, or you are not working within the PyTorch ecosystem.
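The memory savings come from the RevNet construction the library is named after. A minimal sketch of that idea in plain Python (this is not RevLib's API; `f`, `g`, and the function names here are illustrative stand-ins): a reversible block computes `y1 = x1 + f(x2)` and `y2 = x2 + g(y1)`, so the inputs can be reconstructed exactly from the outputs and need not be stored for the backward pass.

```python
def f(v):
    # Stand-in for an arbitrary sub-network.
    return 3.0 * v

def g(v):
    # Stand-in for a second sub-network.
    return v * v

def reversible_forward(x1, x2):
    # Additive coupling: each half is updated using only the other half.
    y1 = x1 + f(x2)
    y2 = x2 + g(y1)
    return y1, y2

def reversible_inverse(y1, y2):
    # Recompute the inputs from the outputs instead of storing them,
    # trading extra compute in the backward pass for activation memory.
    x2 = y2 - g(y1)
    x1 = y1 - f(x2)
    return x1, x2

y1, y2 = reversible_forward(2.0, 5.0)
assert reversible_inverse(y1, y2) == (2.0, 5.0)
```

Unlike gradient checkpointing, which recomputes segments forward from stored checkpoints, this inversion recovers activations backward from the layer's own outputs, so activation memory stays roughly constant in depth.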
Stars
132
Forks
6
Language
Python
License
BSD-2-Clause
Category
ml-frameworks
Last pushed
Aug 06, 2022
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/HomebrewML/revlib"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
explosion/thinc
🔮 A refreshing functional take on deep learning, compatible with your favorite libraries
google-deepmind/optax
Optax is a gradient processing and optimization library for JAX.
patrick-kidger/diffrax
Numerical differential equation solvers in JAX. Autodifferentiable and GPU-capable.
google/grain
Library for reading and processing ML training data.
patrick-kidger/equinox
Elegant easy-to-use neural networks + scientific computing in JAX. https://docs.kidger.site/equinox/