uxlfoundation/oneCCL

oneAPI Collective Communications Library (oneCCL)

Score: 60 / 100 (Established)

This library helps machine learning engineers and researchers accelerate the training of deep learning models across multiple processors or machines. It provides optimized collective communication primitives (such as allreduce, allgather, and broadcast) that distributed training frameworks use to exchange gradients and parameters efficiently, reducing communication overhead and speeding up training. It is designed for those working with large-scale distributed deep learning.


Use this if you are a machine learning engineer or researcher looking to significantly reduce the training time of your deep learning models by distributing the workload efficiently across multiple computational devices or nodes.

Not ideal if you are working with small datasets or single-device training, where the overhead of distributed communication can outweigh any speedup.

deep-learning-training distributed-ai machine-learning-optimization gpu-acceleration high-performance-computing
No package · No dependents
Maintenance 10 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 24 / 25


Stars: 257
Forks: 94
Language: C++
License:
Last pushed: Feb 04, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/uxlfoundation/oneCCL"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
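The same endpoint can also be called from a script; a minimal sketch in Python using only the standard library. The URL components come from the curl example above, but the shape of the JSON response is an assumption and is not documented here:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-report URL for a repository (path layout taken from the curl example)."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

url = quality_url("ml-frameworks", "uxlfoundation", "oneCCL")
print(url)

# Uncomment to fetch live data (subject to the 100 requests/day anonymous limit):
# with urllib.request.urlopen(url) as resp:
#     report = json.load(resp)  # response assumed to be JSON; schema not documented here
#     print(report)
```

With a free API key, the same request can be authenticated to raise the limit to 1,000 requests/day; how the key is passed (header vs. query parameter) is not specified on this page.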