uxlfoundation/oneCCL
oneAPI Collective Communications Library (oneCCL)
This library helps machine learning engineers and researchers accelerate the training of deep learning models across multiple processors or machines. It provides optimized collective communication primitives, such as allreduce and allgather, that frameworks use to exchange gradients and parameters between workers during distributed training, cutting the communication overhead that otherwise slows the learning process. It is designed for those working with large-scale distributed deep learning.
Use this if you are a machine learning engineer or researcher looking to significantly reduce the training time of your deep learning models by distributing the workload efficiently across multiple computational devices or nodes.
Not ideal if you are working with small datasets or training on a single device, where the overhead of distributed communication is unlikely to provide a significant benefit.
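
As a rough illustration of what the library does, below is a minimal host-side sketch patterned after oneCCL's documented C++ example: ranks are launched with MPI, rank 0 creates a key-value store whose address the other ranks use to join the same communicator, and every rank then performs a summing allreduce. The buffer size and reduction operation are illustrative choices, and the MPI-based address exchange is just one way to bootstrap the communicator.

#include <mpi.h>
#include <vector>
#include "oneapi/ccl.hpp"

int main() {
    ccl::init();

    MPI_Init(nullptr, nullptr);
    int size = 0, rank = 0;
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Rank 0 creates the key-value store and broadcasts its address
    // so the other ranks can attach to the same communicator.
    ccl::shared_ptr_class<ccl::kvs> kvs;
    ccl::kvs::address_type main_addr;
    if (rank == 0) {
        kvs = ccl::create_main_kvs();
        main_addr = kvs->get_address();
        MPI_Bcast(main_addr.data(), main_addr.size(), MPI_BYTE, 0, MPI_COMM_WORLD);
    } else {
        MPI_Bcast(main_addr.data(), main_addr.size(), MPI_BYTE, 0, MPI_COMM_WORLD);
        kvs = ccl::create_kvs(main_addr);
    }

    auto comm = ccl::create_communicator(size, rank, kvs);

    // Each rank contributes a buffer filled with its own rank id; after the
    // allreduce every rank holds the element-wise sum (size is illustrative).
    const size_t count = 1024;
    std::vector<float> send_buf(count, static_cast<float>(rank));
    std::vector<float> recv_buf(count, 0.0f);

    ccl::allreduce(send_buf.data(), recv_buf.data(), count,
                   ccl::reduction::sum, comm).wait();

    MPI_Finalize();
    return 0;
}

In a typical setup this is compiled against the oneCCL and MPI development packages and launched with mpirun across the participating processes.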
Stars: 257
Forks: 94
Language: C++
License: —
Category: —
Last pushed: Feb 04, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/uxlfoundation/oneCCL"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
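
If you prefer to call the endpoint from code, here is a minimal C++ sketch equivalent to the curl command above. It assumes libcurl is installed on the system; the response schema is not documented on this page, so the sketch simply prints the raw body.

#include <curl/curl.h>
#include <iostream>
#include <string>

// Append each chunk of the HTTP response to a std::string.
static size_t collect(char* ptr, size_t size, size_t nmemb, void* userdata) {
    static_cast<std::string*>(userdata)->append(ptr, size * nmemb);
    return size * nmemb;
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    if (!curl) return 1;

    std::string body;
    curl_easy_setopt(curl, CURLOPT_URL,
        "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/uxlfoundation/oneCCL");
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);

    if (curl_easy_perform(curl) == CURLE_OK)
        std::cout << body << std::endl;  // raw JSON from the endpoint

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}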
Related frameworks
iree-org/iree
A retargetable MLIR-based machine learning compiler and runtime toolkit.
brucefan1983/GPUMD
Graphics Processing Units Molecular Dynamics
uxlfoundation/oneDAL
oneAPI Data Analytics Library (oneDAL)
rapidsai/cuml
cuML - RAPIDS Machine Learning Library
NVIDIA/cutlass
CUDA Templates and Python DSLs for High-Performance Linear Algebra