nict-wisdom/rannc

RaNNC is an automatic parallelization middleware used to train very large-scale neural networks.

Score: 39 / 100 (Emerging)

This tool helps deep learning researchers and practitioners train extremely large neural networks that typically will not fit in a single GPU's memory. Given your existing PyTorch model code, RaNNC automatically partitions the model and distributes it across multiple GPUs for training. It is aimed at machine learning engineers and researchers working with state-of-the-art, very large-scale deep learning models.

No commits in the last 6 months.

Use this if you need to train neural networks with billions of parameters in PyTorch and are encountering GPU memory limits.

Not ideal if your neural network models are small enough to train on a single GPU or if you are not using PyTorch.
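To make the workflow above concrete, here is a minimal sketch of how RaNNC's Python frontend (pyrannc) is typically used: an ordinary torch.nn.Module is wrapped in `pyrannc.RaNNCModule`, and the wrapped model is then used in a normal training loop while RaNNC handles the partitioning. This is a hedged sketch, not a verified recipe; it assumes a multi-GPU machine, an MPI launch (e.g. `mpirun -np 4 python train.py`), and the `RaNNCModule(model, optimizer)` signature described in the RaNNC documentation, which should be checked against your installed version.

```python
import torch
import torch.nn as nn
import pyrannc  # RaNNC's Python frontend (assumed installed)

# An ordinary PyTorch model; in practice this would be one too large
# to fit in a single GPU's memory.
model = nn.Sequential(
    nn.Linear(1024, 4096),
    nn.ReLU(),
    nn.Linear(4096, 10),
)
opt = torch.optim.Adam(model.parameters())

# Wrapping with RaNNCModule lets RaNNC partition the model and
# distribute it across the GPUs visible to the MPI processes.
# (Assumed API; verify against the RaNNC docs.)
model = pyrannc.RaNNCModule(model, opt)

# Dummy batch standing in for real training data.
x = torch.randn(32, 1024).cuda()
y = torch.randint(0, 10, (32,)).cuda()

# Standard training step; backward/step go through RaNNC.
opt.zero_grad()
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
opt.step()
```

The point of the design is that the training loop itself is unchanged; only the one-line wrap differs from single-GPU PyTorch code.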

Tags: deep-learning-training, neural-networks, large-scale-models, gpu-computing, pytorch

Status: Stale (6m), No Package, No Dependents

Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 15 / 25


Stars: 57
Forks: 9
Language: C++
License: MIT
Last pushed: Oct 15, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/nict-wisdom/rannc"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.