FluxML/DaggerFlux.jl
Distributed computation of differentiation pipelines to use multiple workers, devices, GPU, etc. since Julia wasn't fast enough already
This tool helps machine learning engineers and researchers accelerate the training of complex deep learning models by distributing computation across multiple CPU cores, devices, or GPUs. You give it a Flux.jl model; it evaluates the model in parallel and computes the gradients needed for optimization faster than a single machine could. It is designed for large-scale deep learning workloads in Julia.
No commits in the last 6 months.
Use this if you are training large Flux.jl models in Julia and need to speed up computation by leveraging multiple CPU cores, GPUs, or networked machines.
Not ideal if your deep learning models are small enough to train quickly on a single device or if you are not using Flux.jl and Julia for your machine learning workflows.
Stars: 67
Forks: 2
Language: Julia
License: —
Category: ml-frameworks
Last pushed: Sep 11, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/FluxML/DaggerFlux.jl"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
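The curl one-liner above can also be called from a script. Below is a minimal Python sketch that builds the same endpoint URL programmatically; the response format is not documented here, so the fetch itself is left commented out and field names are not assumed.

```python
"""Sketch of querying the quality API for a given repository.

Assumption: the endpoint path is /quality/<category>/<owner>/<repo>,
matching the curl example above; nothing else about the API is assumed.
"""
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def build_url(category: str, owner: str, repo: str) -> str:
    # Mirrors the curl example's path layout.
    return f"{BASE}/{category}/{owner}/{repo}"

url = build_url("ml-frameworks", "FluxML", "DaggerFlux.jl")
print(url)

# Uncomment to actually fetch (100 requests/day without a key):
# with urllib.request.urlopen(url) as resp:
#     print(resp.read().decode())
```

The request is left commented so the script runs offline; swap in your own category/owner/repo to query other projects.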
Higher-rated alternatives
CliMA/Oceananigans.jl
🌊 Julia software for fast, friendly, flexible, ocean-flavored fluid dynamics on CPUs and GPUs
JuliaLang/julia
The Julia Programming Language
WassimTenachi/PhySO
Physical Symbolic Optimization
EnzymeAD/Enzyme.jl
Julia bindings for the Enzyme automatic differentiator
astroautomata/SymbolicRegression.jl
Distributed High-Performance Symbolic Regression in Julia