FluxML/DaggerFlux.jl

Distributed computation of differentiation pipelines across multiple workers, devices, GPUs, etc. — "since Julia wasn't fast enough already"

Quality score: 28 / 100 (Experimental)

This tool helps machine learning engineers and researchers accelerate the training of complex deep learning models by distributing computations across multiple processors, devices, or GPUs. You give it a Flux.jl model, it runs the forward pass in parallel across workers, and it supports gradient calculations for optimization, finishing faster than a single-machine run. It's designed for large-scale deep learning tasks in Julia.
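A minimal sketch of that workflow, assuming the DaggerChain wrapper shown in the repository's README (the repo has not been pushed since Sep 2023, so the API may have drifted):

    using Distributed
    addprocs(2)                        # assumption: two extra local worker processes
    @everywhere using DaggerFlux, Flux

    chain  = Chain(Dense(3, 3, relu), Dense(3, 1))
    dchain = DaggerChain(chain)        # wraps the Chain so layers run as Dagger tasks

    x = rand(Float32, 3)
    y = dchain(x)                      # forward pass scheduled across the workers

    # Gradients are meant to flow through the wrapper (Zygote under the hood),
    # so the usual Flux optimization loop applies; not verified against a release.
    grads = gradient(m -> sum(m(x)), dchain)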

No commits in the last 6 months.

Use this if you are training large Flux.jl models in Julia and need to speed up computation by leveraging multiple CPU cores, GPUs, or networked machines.

Not ideal if your deep learning models are small enough to train quickly on a single device or if you are not using Flux.jl and Julia for your machine learning workflows.
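Provisioning the extra compute follows the standard Julia Distributed pattern rather than anything package-specific. A sketch, where gpu-node-1 is a hypothetical host reachable over passwordless SSH:

    using Distributed

    addprocs(4)                    # four worker processes on the local machine
    addprocs([("gpu-node-1", 2)])  # hypothetical remote host: two workers via SSH

    # Every worker needs the packages loaded before the model is built
    @everywhere using DaggerFlux, Flux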

deep-learning machine-learning-engineering model-training scientific-computing high-performance-computing
Flags: Stale (6m), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 4 / 25

Stars: 67
Forks: 2
Language: Julia
License:
Last pushed: Sep 11, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/FluxML/DaggerFlux.jl"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.