ravenprotocol/ravnest

Decentralized Asynchronous Training on Heterogeneous Devices

Score: 40 / 100 (Emerging)

This helps machine learning practitioners and researchers train complex deep learning models using a network of diverse, consumer-grade computers. You feed it your large datasets and model architectures, and it efficiently distributes the training process across available machines, even if they're connected via the internet. It simplifies managing distributed training to produce trained deep learning models faster.

Use this if you need to train large, sophisticated deep learning models but lack access to a dedicated, homogeneous supercomputing cluster.

Not ideal if you are working with small datasets or simpler models that can be trained efficiently on a single machine or a standard GPU.

deep-learning machine-learning-research distributed-computing neural-network-training ai-development
No Package · No Dependents
Maintenance: 6 / 25
Adoption: 5 / 25
Maturity: 16 / 25
Community: 13 / 25


Stars: 10
Forks: 2
Language: Python
License: MIT
Last pushed: Nov 11, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/ravenprotocol/ravnest"

Open to everyone: 100 requests/day, no key needed. Get a free API key for 1,000 requests/day.
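The same endpoint can be called programmatically. A minimal Python sketch follows; it assumes only the URL pattern shown in the curl example above (the response schema and any error formats are not documented here, so the live fetch is left as a commented-out step):

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-data endpoint URL for a repository.

    `category` is the collection segment from the curl example
    (e.g. "ml-frameworks"); the exact set of valid categories is
    an assumption, not documented here.
    """
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("ml-frameworks", "ravenprotocol", "ravnest")

# Uncomment to fetch live data (requires network access; subject to
# the 100 requests/day anonymous rate limit):
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
#     print(data)
```

Keeping URL construction in a small helper makes it easy to loop over many repositories while staying within the daily rate limit.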