pollockjj/ComfyUI-MultiGPU

This custom_node for ComfyUI adds one-click "Virtual VRAM" for any UNet and CLIP loader, as well as MultiGPU integration in WanVideoWrapper, managing the offload/block swap of layers to DRAM *or* VRAM to maximize the latent space available on your card. It also includes nodes for loading entire components (UNet, CLIP, VAE) directly onto the device of your choice.

62 / 100 (Established)

This tool helps AI artists and researchers using ComfyUI maximize their GPU's potential for generating images and videos. It takes large AI models (like Stable Diffusion checkpoints, UNets, VAEs, or CLIPs) as input and intelligently distributes parts of them across your GPU's VRAM, other GPUs, or even your system's main RAM. This allows you to generate larger images, longer videos, or run more complex workflows without hitting VRAM limits, freeing up your main GPU for actual computation.

823 stars. Actively maintained with 7 commits in the last 30 days.

Use this if you are an AI artist or researcher using ComfyUI and frequently encounter 'out of memory' errors when trying to generate large images, long videos, or complex AI art with large models, or if you want to optimize how your multiple GPUs work together.

Not ideal if you are not using ComfyUI for generative AI tasks, or if you already have ample GPU VRAM and are not encountering memory constraints during your image or video generation workflows.

Tags: AI Art Generation, ComfyUI Workflows, Generative AI, VRAM Management, Multi-GPU Optimization
No Package · No Dependents
Maintenance 20 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 16 / 25

How are scores calculated?

Stars: 823
Forks: 62
Language: Python
License: GPL-3.0
Last pushed: Mar 17, 2026
Commits (30d): 7

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/pollockjj/ComfyUI-MultiGPU"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
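The endpoint path follows a predictable `/{ecosystem}/{owner}/{repo}` pattern, so the same query can be scripted for any repository. A minimal sketch in Python using only the standard library; the JSON response fields are not documented on this page, so `fetch_quality` simply returns the decoded payload as-is rather than assuming its shape.

```python
import json
import urllib.request
from urllib.parse import quote

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a repository."""
    return f"{API_BASE}/{quote(ecosystem)}/{quote(owner)}/{quote(repo)}"

def fetch_quality(ecosystem: str, owner: str, repo: str) -> dict:
    """GET the endpoint and decode the JSON response.

    No API key is sent, so this stays within the 100 requests/day
    anonymous quota mentioned above.
    """
    with urllib.request.urlopen(quality_url(ecosystem, owner, repo)) as resp:
        return json.load(resp)

# The URL for this repo, matching the curl example above:
url = quality_url("diffusion", "pollockjj", "ComfyUI-MultiGPU")
```

Calling `fetch_quality("diffusion", "pollockjj", "ComfyUI-MultiGPU")` would perform the same request as the curl command shown above.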