douban/paracel
Distributed training framework with parameter server
This framework helps machine learning engineers and researchers efficiently train complex models such as Logistic Regression, SVD, or LDA on very large datasets. It partitions both the dataset and the model parameters across multiple machines using a parameter-server architecture, producing a trained model far faster than a single machine could. It's designed for professionals working with big data and advanced machine learning algorithms.
338 stars. No commits in the last 6 months.
Use this if you need to train machine learning models on datasets so large they don't fit on a single computer or take too long to process.
Not ideal if your datasets are small to medium-sized or you are working with simple models that train quickly on a single machine.
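The parameter-server pattern the description refers to can be illustrated with a toy example. This is a minimal sketch, not paracel's actual C++ API: workers each compute a gradient on their own data shard, and a central "server" averages the pushed gradients before updating the shared weight.

```python
def worker_gradient(w, shard):
    """One worker's job: gradient of mean squared error for y = w*x on its shard."""
    g = 0.0
    for x, y in shard:
        g += 2 * (w * x - y) * x
    return g / len(shard)

def parameter_server_train(shards, lr=0.01, steps=200):
    """Toy synchronous parameter-server loop: every step, each worker pulls the
    current weight, computes a local gradient, and the server averages the
    pushed gradients to update the shared parameter."""
    w = 0.0
    for _ in range(steps):
        grads = [worker_gradient(w, shard) for shard in shards]
        w -= lr * sum(grads) / len(grads)
    return w

# Synthetic data for y = 3x, split across 4 simulated "machines".
data = [(x, 3.0 * x) for x in [i / 10 for i in range(1, 41)]]
shards = [data[i::4] for i in range(4)]
w = parameter_server_train(shards)  # converges close to 3.0
```

In a real deployment the shards live on different machines and the push/pull steps are network calls, which is the coordination problem frameworks like paracel handle for you.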
Stars: 338
Forks: 83
Language: C++
License: —
Category:
Last pushed: Dec 09, 2016
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/douban/paracel"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
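The same request can be made from Python instead of curl. This is a sketch under assumptions: the URL path layout is taken from the curl example above, the `fetch_quality` helper name is hypothetical, and the page does not document the response schema, so no field names are assumed.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def repo_quality_url(category: str, owner: str, name: str) -> str:
    """Build the quality-API URL for a repo (path layout from the curl example)."""
    return f"{BASE}/{category}/{owner}/{name}"

def fetch_quality(url: str):
    """Fetch and decode the JSON response; the schema is undocumented here,
    so inspect the returned dict before relying on specific fields."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

url = repo_quality_url("ml-frameworks", "douban", "paracel")
# data = fetch_quality(url)  # uncomment to make the live request
```

Remember the anonymous rate limit of 100 requests/day when polling this endpoint.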
Higher-rated alternatives
tensorflow/tensorflow
An Open Source Machine Learning Framework for Everyone
microsoft/tensorwatch
Debugging, monitoring and visualization for Python Machine Learning and Data Science
KomputeProject/kompute
General purpose GPU compute framework built on Vulkan to support 1000s of cross vendor graphics...
hailo-ai/hailort-drivers
The Hailo PCIe driver is required for interacting with a Hailo device over the PCIe interface
NVIDIA/nvshmem
NVIDIA NVSHMEM is a parallel programming interface for NVIDIA GPUs based on OpenSHMEM. NVSHMEM...