pip3 install torch torchvision fails while installing ComfyUI

This happens if you follow the ComfyUI installation instructions:

(comfyui-env) username@sparkai:~$ pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cu129
Looking in indexes: https://download.pytorch.org/whl/cu129
Collecting torch
Downloading https://download.pytorch.org/whl/cu129/torch-2.9.0%2Bcu129-cp312-cp312-manylinux_2_28_aarch64.whl.metadata (30 kB)
ERROR: THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS FILE. If you have updated the package versions, please update the hashes. Otherwise, examine the package contents carefully; someone may have tampered with them.
torch from https://download.pytorch.org/whl/cu129/torch-2.9.0%2Bcu129-cp312-cp312-manylinux_2_28_aarch64.whl:
Expected sha256 05df84ccec407908cb70f89d6c2675b8220661f23d7de0cf899f4401f8ab2798
Got 2b0a3a5d37a8d7447e56e7e4e27280f881e805fbae79130fa8874bcfe6eae333
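For context, that error means pip's SHA-256 check on the downloaded wheel failed: the digest of the file it fetched did not match the digest recorded for that wheel, which usually points to a corrupted or stale cached download rather than actual tampering. A minimal sketch of the same check (the wheel path below is a hypothetical example, not a file from this log):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 hex digest of a file, streaming in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare the result against the "Expected sha256" pip printed;
# a mismatch means the local file is not the wheel pip expected.
# print(sha256_of("torch-2.9.0+cu129-cp312-cp312-manylinux_2_28_aarch64.whl"))
```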

Not sure why this is happening to you, as I do not get the error.

Have you tried purging your cache? pip cache purge

Thank you

pip cache purge fixed the problem

I too had problems installing ComfyUI according to these instructions:

Everything seemed to install fine up until the server test. Here’s a log excerpt:

(comfyui-env) trelease@spark-993c:~/ComfyUI$ python main.py --listen 0.0.0.0
Checkpoint files will always be loaded safely.
Traceback (most recent call last):
File "/home/trelease/ComfyUI/main.py", line 149, in <module>
import execution
File "/home/trelease/ComfyUI/execution.py", line 15, in <module>
import comfy.model_management
File "/home/trelease/ComfyUI/comfy/model_management.py", line 237, in <module>
total_vram = get_total_memory(get_torch_device()) / (1024 * 1024)
^^^^^^^^^^^^^^^^^^
File "/home/trelease/ComfyUI/comfy/model_management.py", line 187, in get_torch_device
return torch.device(torch.cuda.current_device())
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/trelease/comfyui-env/lib/python3.12/site-packages/torch/cuda/__init__.py", line 1069, in current_device
_lazy_init()
File "/home/trelease/comfyui-env/lib/python3.12/site-packages/torch/cuda/__init__.py", line 403, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
(comfyui-env) trelease@spark-993c:~/ComfyUI$


I’ve got a “week 0” (reserved) Spark Founder’s Edition that was shipped the day after Jensen delivered Elon’s ;) …

The USB drive OS setup was fast and painless, and I’ve been intensely “vibing” with the openwebui/ollama container since 10/17. The OS apt updates are current.

I’ve read elsewhere about “Torch not compiled with CUDA enabled” issues, but found no solutions relevant to Spark. This Forum thread seemed the only one related to the issue(s).

Putting together all that I’ve read, the issue seems to turn on the comfyui env install using a ‘deficient’ CUDA 12.9 build, whereas the base OS is CUDA 13+…

As an experienced Jetson developer, I’m nonetheless fairly reluctant to start hacking on the Spark install code, even if another NV dev or two reported ComfyUI works fine with CUDA 13…

Is there any straightforward fix for this?

Expectation-wise, I was hoping the ‘production platform’ Spark Playbooks and build.nvidia.. wouldn’t present day-0 glitches. Guess I was spoiled immediately by the cool openwebui-ollama Docker container ;))

(On the AGX Thor side, we’re still waiting for in-house NV JP 7.1 upgrades to solve some knotty terminal process/processor lag problems.)

Oh well.

Best regards!

Sounds like you did not install PyTorch correctly. Did you run this command exactly? pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cu129

That’s exactly what I did, according to the scripts on the NV page originally referenced. Would you recommend I repeat those steps, perhaps after I delete the existing cu129 download?

Thank you!

RBT

Yeah, try it again; maybe it didn’t resolve correctly.

Will do. Thanks!

RBT

I’m trying the same tutorial I guess: Comfy UI | DGX Spark
And I’m not able to run it; when starting it with the command "python main.py --listen 0.0.0.0", I get the error:

Checkpoint files will always be loaded safely.
Traceback (most recent call last):
File "/home/admin/ComfyUI/main.py", line 149, in <module>
import execution
File "/home/admin/ComfyUI/execution.py", line 15, in <module>
import comfy.model_management
File "/home/admin/ComfyUI/comfy/model_management.py", line 238, in <module>
total_vram = get_total_memory(get_torch_device()) / (1024 * 1024)
^^^^^^^^^^^^^^^^^^
File "/home/admin/ComfyUI/comfy/model_management.py", line 188, in get_torch_device
return torch.device(torch.cuda.current_device())
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/admin/comfyui-env/lib/python3.12/site-packages/torch/cuda/__init__.py", line 1069, in current_device
_lazy_init()
File "/home/admin/comfyui-env/lib/python3.12/site-packages/torch/cuda/__init__.py", line 403, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

The only difference I notice is that the tutorial has been updated for CUDA 13, but even installing the suggested 12.9 build is not working for me.

Any suggestion?

I see exactly the same error message as @Andrea.Padovani when following the same guide.

Also:

(comfyui-env) $ python
Python 3.12.3 (main, Aug 14 2025, 17:47:21) [GCC 13.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> print(f"PyTorch version: {torch.__version__}")
PyTorch version: 2.9.0+cpu
>>> print(f"CUDA available: {torch.cuda.is_available()}")
CUDA available: False
>>> print(f"CUDA version: {torch.version.cuda}")
CUDA version: None
>>>

I fixed this by uninstalling and reinstalling; the only real difference was that I cleared out the ‘GPU’ RAM first:

ollama stop gpt-oss:120b
sudo sh -c 'sync; echo 3 > /proc/sys/vm/drop_caches'
pip uninstall torch torchvision torchaudio -y
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu130

Now ComfyUI runs fine and I have been able to generate images on the GB10.

Not sure whether it was the same problem, or whether something in the repo changed in the meantime, but I was able to run the tutorial on my device as well, with the same exact command as before :D
Thanks for sharing your experience.

P.S. Local AI image generation is really awesome on this little box.
Can’t wait to move on to other tutorials and topics ;)


Just FYI, the cu129 build is noticeably faster than the cu130 build on Spark. At least it was a week ago.
On the same default Qwen-Image workflow, cu129 takes 96 seconds versus 139 seconds for cu130.
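If you want to reproduce that comparison on your own box, a minimal sketch for timing a single run is below; the lambda is a placeholder workload, and you would swap in whatever call kicks off your ComfyUI/Qwen-Image workflow:

```python
import time

def time_run(fn, *args, **kwargs):
    """Run fn once and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed = time.perf_counter() - start
    return result, elapsed

# Placeholder workload; replace with your actual generation call.
_, seconds = time_run(lambda: sum(i * i for i in range(1_000_000)))
print(f"run took {seconds:.3f} s")
```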
