nv-legate/multimesh-jax
PJRT plugin and Python APIs for MPMD workflows in JAX
This framework helps machine learning engineers train large models, such as transformer networks, efficiently across multiple GPUs. It takes existing JAX computations and orchestrates them as MPMD (multiple program, multiple data) workflows across multiple device meshes, optimizing for speed and resource use. The result is faster training and inference, especially for complex deep learning architectures.
No commits in the last 6 months.
Use this if you are developing or training large-scale machine learning models in JAX and need to improve performance by distributing computations across multiple GPUs efficiently.
Not ideal if you are working with small models that don't require distributed computing, or if you are not using JAX as your primary machine learning framework.
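As a rough illustration of the kind of multi-GPU JAX workflow this plugin targets, the sketch below shards a computation across available devices using only stock jax.sharding APIs. multimesh-jax's own Python API is not documented on this page, so nothing here is specific to the plugin; a true MPMD setup would run different programs on different meshes, whereas this baseline is plain SPMD sharding.

import jax
import jax.numpy as jnp
import numpy as np
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# Arrange whatever devices are available (GPUs, or CPU as a fallback)
# into a one-dimensional mesh with a single "data" axis.
devices = np.array(jax.devices())
mesh = Mesh(devices, axis_names=("data",))

# Shard a batch of activations across the "data" axis of the mesh.
x = jnp.ones((len(devices) * 4, 128))
x = jax.device_put(x, NamedSharding(mesh, P("data", None)))

@jax.jit
def layer(x):
    # jit compiles once; XLA then executes each shard on its own device in parallel.
    return jax.nn.relu(x @ jnp.ones((128, 128)))

y = layer(x)       # the result stays sharded across the mesh
print(y.sharding)  # shows how y is laid out across devices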
Stars: 8
Forks: —
Language: C++
License: Apache-2.0
Category: transformers
Last pushed: Aug 04, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/nv-legate/multimesh-jax"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
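For example, a minimal Python equivalent of the curl command above, assuming only the requests library; the response schema is not documented on this page, so the JSON is printed as-is:

import requests

# Same endpoint as the curl example; no API key is needed within the
# free tier of 100 requests/day.
url = "https://pt-edge.onrender.com/api/v1/quality/transformers/nv-legate/multimesh-jax"
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # fail loudly on rate limits or server errors
print(resp.json())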
Higher-rated alternatives
OptimalScale/LMFlow
An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
adithya-s-k/AI-Engineering.academy
Mastering Applied AI, One Concept at a Time
jax-ml/jax-llm-examples
Minimal yet performant LLM examples in pure JAX
young-geng/scalax
A simple library for scaling up JAX programs
riyanshibohra/TuneKit
Upload your data → Get a fine-tuned SLM. Free.