nict-wisdom/rannc
RaNNC is an automatic parallelization middleware used to train very large-scale neural networks.
RaNNC helps deep learning researchers and practitioners train neural networks too large to fit in a single GPU's memory. Given existing PyTorch model code, it automatically partitions the model and distributes the partitions across multiple GPUs for training. It targets machine learning engineers and researchers working with state-of-the-art, very large-scale models.
No commits in the last 6 months.
Use this if you need to train neural networks with billions of parameters in PyTorch and are encountering GPU memory limits.
Not ideal if your models are small enough to train on a single GPU, or if you are not using PyTorch.
Stars: 57
Forks: 9
Language: C++
License: MIT
Category: ml-frameworks
Last pushed: Oct 15, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/nict-wisdom/rannc"
Open to everyone: 100 requests/day with no key. A free key raises the limit to 1,000 requests/day.
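The curl command above can also be issued from Python. This is a minimal sketch using only the standard library; the endpoint path comes from the listing, but the response format and the header used to pass an API key are assumptions, since the listing does not document them:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category, owner, repo):
    # Build the endpoint URL for one repository's quality record.
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category, owner, repo, api_key=None):
    # Fetch the quality record and parse it as JSON.
    # NOTE: the Bearer Authorization header is an assumption; the
    # listing does not say how an API key should be sent.
    req = urllib.request.Request(quality_url(category, owner, repo))
    if api_key:
        req.add_header("Authorization", f"Bearer {api_key}")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

For example, `quality_url("ml-frameworks", "nict-wisdom", "rannc")` reproduces the URL shown in the curl command.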
Higher-rated alternatives
apache/tvm
Open Machine Learning Compiler Framework
uxlfoundation/oneDNN
oneAPI Deep Neural Network Library (oneDNN)
Tencent/ncnn
ncnn is a high-performance neural network inference framework optimized for the mobile platform
OpenMined/TenSEAL
A library for doing homomorphic encryption operations on tensors
iree-org/iree-turbine
IREE's PyTorch Frontend, based on Torch Dynamo.