alibaba/easydist

Automated Parallelization System and Infrastructure for Multiple Ecosystems

Score: 35 / 100 (Emerging)

This tool helps machine learning engineers and researchers speed up model training and inference. By adding a decorator to existing PyTorch or JAX code, it automatically distributes the computational load across multiple processing units, turning a slow, single-device operation into a fast, parallel one without extensive manual rewriting.
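To make the decorator claim concrete, here is a minimal PyTorch-flavored sketch of that workflow. The `easydist_setup` and `easydist_compile` names follow the project's published examples, but treat the exact import paths, arguments, and launch details as assumptions rather than a verified API reference:

import torch.nn.functional as F
from easydist import easydist_setup
from easydist.torch.api import easydist_compile  # import path assumed

easydist_setup(backend="torch", device="cuda")  # one-time setup; arguments assumed

@easydist_compile()  # the decorator that parallelizes the training step
def train_step(model, optimizer, batch, labels):
    loss = F.cross_entropy(model(batch), labels)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss

The decorated function is then called exactly like its single-device counterpart; per the description above, easydist handles distributing the work across devices.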

No commits in the last 6 months.

Use this if you are a machine learning engineer or researcher struggling with long training times or slow inference for large models in PyTorch or JAX, and you want to utilize more computing power efficiently with minimal code changes.

Not ideal if you are not working with large-scale machine learning models or if your primary frameworks are not PyTorch or JAX.

deep-learning model-training ml-infrastructure high-performance-computing distributed-ml
Stale (6m) · No Package · No Dependents
Score breakdown (each component out of 25; the four components sum to the overall 35 / 100):

Maintenance: 0 / 25
Adoption: 9 / 25
Maturity: 16 / 25
Community: 10 / 25


Stars: 82
Forks: 7
Language: Python
License: Apache-2.0
Last pushed: Nov 19, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/alibaba/easydist"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
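For scripting, the same request can be made from Python with the standard library. This sketch assumes the endpoint returns JSON; the exact field names are not documented here, so the response is printed verbatim:

import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/alibaba/easydist"

with urllib.request.urlopen(URL, timeout=10) as resp:  # unauthenticated tier: 100 requests/day
    data = json.load(resp)

print(json.dumps(data, indent=2))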