skolai/fewbit

Compression scheme for activation gradients in the backward pass

Score: 37 / 100 (Emerging)

This project helps machine learning engineers train very large neural networks more efficiently by reducing the memory required during the backward pass. It replaces the activation functions and linear layers in your existing PyTorch model with memory-optimized equivalents that store compressed activation gradients. The primary users are deep learning practitioners working with models that push the limits of GPU memory.
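The core idea behind this kind of activation-gradient compression is that an activation's derivative can be quantized to a handful of representative levels, so the backward pass only needs to store a few bits per element instead of the full-precision input. Below is a minimal pure-Python sketch of that idea for GELU; the quantization levels here are illustrative placeholders, not fewbit's actual fitted levels, and this is a conceptual demonstration rather than the library's implementation.

```python
import math

def gelu_grad(x):
    # Exact derivative of GELU(x) = x * Phi(x):
    # GELU'(x) = Phi(x) + x * phi(x), where Phi/phi are the standard
    # normal CDF and PDF.
    phi = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
    Phi = 0.5 * (1 + math.erf(x / math.sqrt(2)))
    return Phi + x * phi

def quantize_grads(xs, levels):
    """Forward pass: instead of saving the full-precision inputs needed to
    recompute the gradient later, save only the index of the nearest
    quantization level -- ceil(log2(len(levels))) bits per element."""
    codes = []
    for x in xs:
        g = gelu_grad(x)
        idx = min(range(len(levels)), key=lambda i: abs(levels[i] - g))
        codes.append(idx)
    return codes

def dequantize_grads(codes, levels):
    """Backward pass: reconstruct approximate activation gradients
    from the stored few-bit codes."""
    return [levels[c] for c in codes]

# 2-bit example with four illustrative derivative levels (not fitted).
levels = [-0.1, 0.0, 0.5, 1.0]
xs = [-3.0, -0.5, 0.0, 2.0]
codes = quantize_grads(xs, levels)
approx = dequantize_grads(codes, levels)
```

In the real library the levels are chosen to minimize the reconstruction error of the derivative, and the quantize/dequantize steps run on the GPU inside custom autograd functions; the memory saving comes from storing 1 to 4 bits per activation instead of 16 or 32.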

No commits in the last 6 months.

Use this if you are training large neural networks and frequently hit out-of-memory errors, or if you want to cut your GPU memory footprint to enable larger batch sizes or more complex models.

Not ideal if you are working with small models or datasets where memory efficiency is not a primary concern, or if you need to use highly custom activation functions not included in the library.

deep-learning-training neural-network-optimization GPU-memory-management large-model-training
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 8 / 25
Maturity 16 / 25
Community 13 / 25


Stars: 45
Forks: 6
Language: Python
License: BSD-3-Clause
Last pushed: Jul 26, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/skolai/fewbit"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
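For scripted access, the curl command above can be reproduced with the standard library. This is a minimal sketch: the URL pattern comes from the example above, but the response schema and the authentication header name for keyed requests are assumptions to verify against the API's documentation.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_api_url(collection, owner, repo):
    """Build the quality-endpoint URL, e.g. for ml-frameworks/skolai/fewbit."""
    return f"{API_BASE}/{collection}/{owner}/{repo}"

def fetch_quality(collection, owner, repo, api_key=None):
    """Fetch the quality record as parsed JSON.

    Passing an API key raises the limit from 100 to 1,000 requests/day;
    the Bearer header used here is a hypothetical placeholder -- check
    the API docs for the actual auth mechanism.
    """
    req = urllib.request.Request(quality_api_url(collection, owner, repo))
    if api_key:
        req.add_header("Authorization", f"Bearer {api_key}")
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)
```

Example: `fetch_quality("ml-frameworks", "skolai", "fewbit")` issues the same request as the curl command shown above.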