douban/paracel

Distributed training framework with parameter server

Score: 49 / 100 (Emerging)

This framework helps machine learning engineers and researchers efficiently train complex models such as Logistic Regression, SVD, or LDA on very large datasets. It takes a massive dataset and a large set of parameters, distributes them across multiple machines, and outputs a trained model much faster than traditional single-machine methods. It is designed for professionals working with big data and advanced machine learning algorithms.
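The parameter-server pattern described above can be sketched in a single process: a server holds the global weights, and workers pull them, compute gradients on their own data shard, and push updates back. This is an illustrative sketch only; the class and function names are not paracel's API.

```python
import numpy as np

class ParameterServer:
    """Holds the global model; workers pull parameters and push gradients.
    (Hypothetical names, not paracel's C++ interface.)"""
    def __init__(self, dim):
        self.w = np.zeros(dim)

    def pull(self):
        # Workers fetch a copy of the current global weights
        return self.w.copy()

    def push(self, grad, lr=0.1):
        # Apply one worker's gradient update to the global weights
        self.w -= lr * grad

def worker_step(server, X, y):
    """One logistic-regression gradient step on this worker's data shard."""
    w = server.pull()
    p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted probabilities
    grad = X.T @ (p - y) / len(y)             # mean log-loss gradient
    server.push(grad)

# Two "workers", each owning half of a toy dataset
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)
server = ParameterServer(3)
for _ in range(200):
    worker_step(server, X[:50], y[:50])
    worker_step(server, X[50:], y[50:])
```

In the real framework the pull/push calls are network operations against dedicated server nodes, which is what lets the parameter set exceed any single machine's memory.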

338 stars. No commits in the last 6 months.

Use this if you need to train machine learning models on datasets so large they don't fit on a single computer or take too long to process.

Not ideal if your datasets are small to medium-sized or you are working with simple models that train quickly on a single machine.

Tags: distributed-machine-learning · big-data-analytics · model-training · large-scale-optimization
Flags: Stale (6 months) · No package · No dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 23 / 25


Stars: 338
Forks: 83
Language: C++
License:
Last pushed: Dec 09, 2016
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/douban/paracel"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
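For programmatic access from Python, the same endpoint can be fetched with the standard library. The `quality_url` helper below is hypothetical, and the shape of the JSON response is an assumption not documented on this page.

```python
import json
import urllib.request

def quality_url(collection, owner, repo):
    """Build the quality-score endpoint URL (path layout taken from the
    curl example above; the helper name itself is made up)."""
    return f"https://pt-edge.onrender.com/api/v1/quality/{collection}/{owner}/{repo}"

url = quality_url("ml-frameworks", "douban", "paracel")
print(url)

# Actually fetching requires network access and counts against the
# 100-requests/day anonymous quota:
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
```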