nv-legate/multimesh-jax

PJRT plugin and Python APIs for MPMD workflows in JAX

Quality score: 21 / 100 (Experimental)

This project provides a PJRT plugin and Python APIs for MPMD (multiple-program, multiple-data) workflows in JAX, aimed at training large models such as transformers across multiple GPUs. It orchestrates existing JAX computations across multiple device meshes, with the goal of improving speed and resource utilization for both training and inference.
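
As a rough illustration of that kind of multi-device orchestration, here is a minimal sketch using standard JAX sharding APIs. It is not multimesh-jax's own API (which this page does not document), and plain JAX sharding is SPMD over a single mesh rather than the MPMD workflows the plugin targets; the shapes and axis names below are illustrative assumptions.

import numpy as np
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, NamedSharding, PartitionSpec

# One-dimensional mesh over all visible devices (GPUs, or CPU if none).
devices = np.array(jax.devices())
mesh = Mesh(devices, axis_names=("data",))

# Shard a batch along the mesh's "data" axis; assumes the leading
# dimension (8 here) is divisible by the device count.
sharding = NamedSharding(mesh, PartitionSpec("data"))
batch = jax.device_put(jnp.arange(8.0 * 128).reshape(8, 128), sharding)

@jax.jit
def layer(x):
    # jit compiles once; XLA inserts any cross-device communication.
    return jnp.tanh(x @ x.T)

out = layer(batch)
print(out.sharding)  # inspect how the result is laid out across devices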

No commits in the last 6 months.

Use this if you are developing or training large-scale machine learning models in JAX and need to distribute computations efficiently across multiple GPUs.

Not ideal if you are working with small models that don't require distributed computing, or if you are not using JAX as your primary machine learning framework.

Tags: deep-learning, machine-learning-engineering, model-training, GPU-acceleration, distributed-computing

Status: Stale (6 months), no package, no dependents

Score breakdown:
Maintenance: 2 / 25
Adoption: 4 / 25
Maturity: 15 / 25
Community: 0 / 25


Stars: 8
Forks:
Language: C++
License: Apache-2.0
Category: llm-fine-tuning
Last pushed: Aug 04, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/nv-legate/multimesh-jax"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.
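
For scripted access, a short Python sketch of the same request (assuming the endpoint returns JSON; the response schema is not documented here):

import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/nv-legate/multimesh-jax"

with urllib.request.urlopen(URL, timeout=30) as resp:
    data = json.load(resp)  # assumes a JSON body, per "Get this data via API"

print(json.dumps(data, indent=2))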