jmaczan/torch-webgpu

PyTorch compiler and WebGPU runtime

Score: 35 / 100 (Emerging)

This project helps machine learning engineers and researchers run PyTorch models, especially large language models (LLMs), on hardware that supports WebGPU. You provide your existing PyTorch model code, and the tool compiles and runs it via WebGPU, which is broadly supported across devices and browsers. The result is your model executing and producing outputs without requiring a specialized GPU backend such as CUDA.

Available on PyPI.

Use this if you are a machine learning engineer or researcher who needs to deploy PyTorch models, particularly large language models, across a wide range of hardware using WebGPU as a unified runtime.

Not ideal if your primary concern is absolute peak performance on highly specialized hardware like NVIDIA CUDA GPUs, as current performance is not yet fully optimized.

Tags: machine-learning-deployment, large-language-models, model-inference, cross-platform-ML, ML-compiler
No License
Score breakdown:
Maintenance 10 / 25
Adoption 5 / 25
Maturity 14 / 25
Community 6 / 25


Stars: 14
Forks: 1
Language: C++
License: none
Last pushed: Feb 06, 2026
Commits (30d): 0
Dependencies: 2

Get this data via the API:

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/jmaczan/torch-webgpu"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
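As a convenience, the same endpoint can be built programmatically for any repository slug. The sketch below only constructs the URL shown in the curl example above; the shape of the JSON response is not documented here, so it makes no assumptions about it, and the `quality_url` helper name is illustrative, not part of the API.

```python
# Minimal sketch: build the quality-API URL for an "owner/name" repo slug.
# The base endpoint is taken from the curl example; nothing else is assumed.
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(repo_slug: str) -> str:
    """Return the API URL for a slug like 'jmaczan/torch-webgpu'."""
    owner, name = repo_slug.split("/", 1)
    # Percent-encode each path segment defensively; plain ASCII slugs
    # like this one pass through unchanged.
    return f"{BASE}/{quote(owner)}/{quote(name)}"

print(quality_url("jmaczan/torch-webgpu"))
# https://pt-edge.onrender.com/api/v1/quality/llm-tools/jmaczan/torch-webgpu
```

Fetching that URL with any HTTP client (curl, `requests`, etc.) returns the same data shown on this page.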