alibaba/easydist
Automated Parallelization System and Infrastructure for Multiple Ecosystems
This tool helps machine learning engineers and researchers speed up model training and inference. After adding a decorator to existing Python code for PyTorch or JAX, EasyDist automatically distributes the computational load across multiple processing units, turning a slow, single-device run into a fast, parallel one without extensive manual re-coding.
No commits in the last 6 months.
Use this if you are a machine learning engineer or researcher struggling with long training times or slow inference for large models in PyTorch or JAX, and you want to utilize more computing power efficiently with minimal code changes.
Not ideal if you are not working with large-scale machine learning models or if your primary frameworks are not PyTorch or JAX.
Stars: 82
Forks: 7
Language: Python
License: Apache-2.0
Category:
Last pushed: Nov 19, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/alibaba/easydist"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
Higher-rated alternatives
vllm-project/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
sgl-project/sglang
SGLang is a high-performance serving framework for large language models and multimodal models.
alibaba/MNN
MNN: A blazing-fast, lightweight inference engine battle-tested by Alibaba, powering...
xorbitsai/inference
Swap GPT for any LLM by changing a single line of code. Xinference lets you run open-source,...
tensorzero/tensorzero
TensorZero is an open-source stack for industrial-grade LLM applications. It unifies an LLM...