HomebrewML/revlib

A simple and efficient RevNet library for PyTorch, with XLA and DeepSpeed support and parameter offload

Score: 34 / 100 (Emerging)

This is a library for building deep learning models that use significantly less GPU memory. It wraps a standard PyTorch neural network with RevNet-style reversible layers, which recompute activations during the backward pass instead of storing them, making training memory-efficient. This is especially beneficial for very deep networks or large batch sizes. The tool targets machine learning practitioners and researchers who are training large models and hitting memory limits on their GPUs.
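To illustrate why reversible layers save memory, here is a minimal, framework-free sketch of the additive coupling that RevNet-style libraries build on: each block's inputs can be recomputed exactly from its outputs, so intermediate activations need not be kept. The functions `f` and `g` are hypothetical stand-ins, not revlib's actual API.

```python
def forward(x1, x2, f, g):
    """Additive reversible coupling: the outputs fully determine the inputs."""
    y1 = x1 + f(x2)
    y2 = x2 + g(y1)
    return y1, y2

def inverse(y1, y2, f, g):
    """Recompute the inputs from the outputs (done during the backward pass)."""
    x2 = y2 - g(y1)
    x1 = y1 - f(x2)
    return x1, x2

# Arbitrary transforms; note that f and g themselves need NOT be invertible.
f = lambda v: 3 * v + 1
g = lambda v: v * v

y1, y2 = forward(2.0, 5.0, f, g)
assert inverse(y1, y2, f, g) == (2.0, 5.0)  # inputs recovered exactly
```

Because the inverse is exact, a reversible network can discard activations after the forward pass and rebuild them on demand, trading a small amount of recomputation for memory that no longer grows with depth.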

132 stars. No commits in the last 6 months.

Use this if you are training deep neural networks with PyTorch and frequently run out of GPU memory, even when using techniques like gradient checkpointing.

Not ideal if your models are small, you aren't experiencing GPU memory constraints, or you are not working within the PyTorch ecosystem.

deep-learning neural-networks GPU-optimization model-training resource-management
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 8 / 25


Stars: 132
Forks: 6
Language: Python
License: BSD-2-Clause
Last pushed: Aug 06, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/HomebrewML/revlib"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.