ComfyUI on Jetson Thor -- OOM since latest apt-update

I’ve been running ComfyUI workloads on Jetson Thor since last week. They reported 100% offload, but still ran on the GPU quite well. Since the latest update, they simply go OOM while loading a modest 30 GB Flux2 FP8 checkpoint. Any ideas on how to resolve this?

It’s now reporting that I have a PB of VRAM instead of 0, which breaks the offload calculation.

For anyone else hitting this: it’s a ComfyUI bug, and I wrote a patch. https://github.com/comfyanonymous/ComfyUI/issues/11332
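To illustrate the failure mode: when a unified-memory board like Thor reports an impossibly large VRAM figure (a petabyte), any offload planner that trusts it will try to keep everything on-device and then hit a real allocation failure. A minimal sketch of a sanity clamp, with hypothetical function and parameter names (this is not ComfyUI’s actual code or the actual patch):

```python
PB = 1 << 50  # one pebibyte, in bytes

def clamp_reported_vram(reported_free, reported_total, host_total_ram):
    """Sanity-check a driver-reported VRAM figure before planning offload.

    Hypothetical sketch: on a unified-memory SoC, reported VRAM cannot
    exceed physical RAM. If the driver claims more (e.g. a petabyte on
    a board with 128 GB), fall back to the host's own figure so the
    offload math stays sane.
    """
    if reported_total > host_total_ram:
        # Impossible total: clamp the free figure to physical RAM.
        return min(reported_free, host_total_ram)
    return reported_free
```

With a bogus petabyte report and 128 GB of physical RAM, the clamp returns the physical figure; a plausible report is passed through unchanged.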

Hi,

Is there any log shown when you meet the OOM issue?
Thanks.

Here is the ComfyUI log, but it’s not really out of memory; it’s a calculation issue.

File "/home/matt/Local/ComfyUI/comfy/ops.py", line 631, in _apply
    self.register_parameter(key, torch.nn.Parameter(fn(param), requires_grad=False))
                                                    ^^^^^^^^^
File "/home/matt/Local/ComfyUI/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1357, in convert
    return t.to(
           ^^^^^
File "/home/matt/Local/ComfyUI/comfy/quant_ops.py", line 205, in __torch_dispatch__
    return _GENERIC_UTILS[func](func, args, kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/matt/Local/ComfyUI/comfy/quant_ops.py", line 321, in generic_to_dtype_layout
    return _handle_device_transfer(
           ^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/matt/Local/ComfyUI/comfy/quant_ops.py", line 272, in _handle_device_transfer
    new_q_data = qt._qdata.to(device=target_device)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch.OutOfMemoryError: Allocation on device

Got an OOM, unloading all loaded models.

Hi,

Please run tegrastats at the same time to check whether the device is really running out of memory:

$ sudo tegrastats
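If you want to log the readings rather than eyeball them, the RAM field of each tegrastats line can be pulled out with a small regex. A sketch, assuming the usual `RAM used/totalMB` field format (function name is hypothetical):

```python
import re

def parse_tegrastats_ram(line):
    """Extract (used_mb, total_mb) from one tegrastats output line.

    tegrastats typically prints lines containing a field like:
        RAM 3456/7846MB (lfb 2x4MB) SWAP 0/3923MB ...
    Returns a (used, total) tuple in MB, or None if no RAM field
    is present on the line.
    """
    m = re.search(r"RAM (\d+)/(\d+)MB", line)
    if m:
        return int(m.group(1)), int(m.group(2))
    return None
```

Piping `sudo tegrastats` through a loop that calls this per line gives a timestamped memory trace you can compare against the moment the OOM is raised.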

Also, just to confirm: the same code runs normally on r38.2.1 (JetPack 7.0 GA) but fails with OOM on r38.2.2, correct?

Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.