ravenprotocol/ravnest
Decentralized Asynchronous Training on Heterogeneous Devices
Ravnest lets machine learning practitioners and researchers train complex deep learning models across a network of diverse, consumer-grade computers. You supply your large datasets and model architectures, and it distributes the training workload across the available machines, even when they are connected only over the internet. By managing the distributed training process for you, it helps produce trained deep learning models faster.
Use this if you need to train large, sophisticated deep learning models but lack access to a dedicated, homogeneous supercomputing cluster.
Not ideal if you are working with small datasets or simpler models that can be trained efficiently on a single machine or a standard GPU.
Stars
10
Forks
2
Language
Python
License
MIT
Category
Last pushed
Nov 11, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/ravenprotocol/ravnest"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
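For scripted access, the curl command above can be reproduced in Python. This is a minimal sketch: the endpoint URL is taken verbatim from the example, but the shape of the JSON response is an assumption, so the snippet simply decodes and prints whatever the API returns.

```python
import json
import urllib.request

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def build_url(category: str, owner_repo: str) -> str:
    """Compose the per-repository endpoint URL, e.g.
    .../quality/ml-frameworks/ravenprotocol/ravnest"""
    return f"{BASE}/{category}/{owner_repo}"

def fetch_quality(category: str, owner_repo: str) -> dict:
    """GET the endpoint and decode the JSON body.
    No key is needed for up to 100 requests/day."""
    with urllib.request.urlopen(build_url(category, owner_repo), timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))

if __name__ == "__main__":
    data = fetch_quality("ml-frameworks", "ravenprotocol/ravnest")
    print(json.dumps(data, indent=2))
```

The network call is kept behind the `__main__` guard so the helpers can be imported and reused for other repositories in the catalog.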
Higher-rated alternatives
deepspeedai/DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference...
helmholtz-analytics/heat
Distributed tensors and Machine Learning framework with GPU and MPI acceleration in Python
hpcaitech/ColossalAI
Making large AI models cheaper, faster and more accessible
horovod/horovod
Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet.
bsc-wdc/dislib
The Distributed Computing library for python implemented using PyCOMPSs programming model for HPC.