siboehm/ShallowSpeed

Small-scale distributed training of sequential deep learning models, built on NumPy and MPI.

28 / 100 · Experimental

This project helps machine learning engineers train deep learning models built as a plain sequence of layers, such as multilayer perceptrons. You provide your dataset and model architecture, and it distributes the training workload across multiple workers to produce a trained model faster. It's designed for developers building and optimizing these kinds of models.
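To make the core idea concrete, here is a minimal sketch of the data-parallel pattern such a trainer uses, written in the same NumPy-plus-MPI style (via the mpi4py bindings). This is illustrative only, not ShallowSpeed's actual API: each rank computes gradients on its own data shard, then averages them with an MPI allreduce before every optimizer step.

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, world_size = comm.Get_rank(), comm.Get_size()

rng = np.random.default_rng(seed=rank)   # each rank draws its own data shard
X = rng.normal(size=(32, 10))            # local mini-batch of inputs
y = rng.normal(size=(32, 1))             # local targets
W = np.zeros((10, 1))                    # parameters start identical on all ranks

for step in range(100):
    # local backward pass for a linear least-squares model
    grad = X.T @ (X @ W - y) / len(X)
    # sum gradients across ranks, then average: the heart of data parallelism
    avg_grad = np.empty_like(grad)
    comm.Allreduce(grad, avg_grad, op=MPI.SUM)
    avg_grad /= world_size
    W -= 0.01 * avg_grad                 # identical update keeps ranks in sync

Launch it with, for example, mpirun -np 4 python data_parallel_sketch.py. Because allreduce gives every rank the same averaged gradient, the parameters stay identical across workers.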

163 stars. No commits in the last 6 months.

Use this if you are a machine learning engineer working with layer-sequential deep learning models and need to distribute training across multiple processes or machines.

Not ideal if you are working with complex, non-sequential model architectures or require a production-ready, highly optimized distributed training framework.

deep-learning-training distributed-machine-learning model-optimization neural-network-training
No License · Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 8 / 25
Community 10 / 25


Stars: 163
Forks: 9
Language: Python
License: none
Last pushed: Oct 19, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/siboehm/ShallowSpeed"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
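The same request from Python, using only the standard library. The response schema isn't documented here, so this sketch just pretty-prints whatever JSON comes back:

import json
import urllib.request

url = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/siboehm/ShallowSpeed"
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)           # parse the JSON body of the response
print(json.dumps(data, indent=2))    # print it in a readable form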