JetPack 6.2 and PyTorch 2.6.0 on Jetson Orin Nano

Hi everyone,

I am having some trouble getting torch to work with CUDA support on my Jetson Orin Nano. I am using JetPack 6.2 (Ubuntu 22.04 and CUDA 12.6), Python 3.10.12, and

torch-2.6.0-cp310-cp310-linux_aarch64.whl
torchvision-0.21.0-cp310-cp310-linux_aarch64.whl

as recommended in another forum. However, torch.cuda.is_available() still returns False.

Any and all help is appreciated. Thanks!
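For anyone hitting the same thing: the most common cause of torch.cuda.is_available() returning False is that the installed wheel is a CPU-only build, in which case torch.version.cuda is None. A minimal diagnostic sketch (names and structure are my own, not from any official tooling) that prints the pieces that have to line up:

```python
# Minimal environment check: prints the facts that must line up for
# CUDA-enabled PyTorch on Jetson (interpreter, arch, torch build).
import platform


def gather_env_info():
    """Collect interpreter/platform facts plus torch details if importable."""
    info = {
        "python": platform.python_version(),
        "machine": platform.machine(),  # should be 'aarch64' on Jetson
    }
    try:
        import torch
        info["torch"] = torch.__version__
        # None here means a CPU-only wheel was installed.
        info["torch_cuda_build"] = torch.version.cuda
        info["cuda_available"] = torch.cuda.is_available()
    except ImportError:
        info["torch"] = None  # torch not installed in this interpreter
    return info


if __name__ == "__main__":
    for key, value in gather_env_info().items():
        print(f"{key}: {value}")
```

If torch_cuda_build prints None, the wheel itself has no CUDA support and no amount of driver fixing will change the result.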


I am experiencing the same. I was looking at "Libnvdla_compiler.so: cannot open shared object file: No such file or directory - #2 by spolisetty", but there doesn’t seem to be an explanation or solution for the missing libnvdla_runtime.so.

What is your output for: ls -ll /usr/lib/aarch64-linux-gnu/tegra/libnvdla*

I only have /usr/lib/aarch64-linux-gnu/tegra/libnvdla_compiler.so

I haven’t had problems with any missing .so files:

kevin@kevin-desktop:~/Downloads$ ls -ll /usr/lib/aarch64-linux-gnu/tegra/libnvdla*
-rw-r--r-- 1 root root 8159168 Jan 7 20:41 /usr/lib/aarch64-linux-gnu/tegra/libnvdla_compiler.so
-rw-r--r-- 1 root root 6499168 Jan 7 20:41 /usr/lib/aarch64-linux-gnu/tegra/libnvdla_runtime.so
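Since the file can exist but still fail to load (unresolved dependencies), it may be worth checking that the libraries actually dlopen() cleanly, not just that ls finds them. A small sketch, assuming JetPack's standard tegra library path; the function and names are mine, just for illustration:

```python
# Check whether the Tegra NVDLA libraries can actually be loaded,
# not merely whether the files exist on disk.
import ctypes
import os

TEGRA_DIR = "/usr/lib/aarch64-linux-gnu/tegra"  # JetPack's install location
LIBS = ["libnvdla_compiler.so", "libnvdla_runtime.so"]


def check_libs(directory=TEGRA_DIR, names=LIBS):
    """Map each library name to 'loadable', 'missing', or the load error."""
    results = {}
    for name in names:
        path = os.path.join(directory, name)
        if not os.path.exists(path):
            results[name] = "missing"
            continue
        try:
            ctypes.CDLL(path)  # raises OSError on unresolved dependencies too
            results[name] = "loadable"
        except OSError as exc:
            results[name] = f"error: {exc}"
    return results


if __name__ == "__main__":
    for lib, status in check_libs().items():
        print(f"{lib}: {status}")
```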

But nonetheless I can’t get torch and CUDA to hold hands and sing Kumbaya. I’ve double-checked the compatibility matrix here: Start Locally | PyTorch , tried building from source, triple-checked the Python version and CUDA version, and set every flag I could find, but no luck. The error I got when building from source was:
ModuleNotFoundError: No module named 'typing_extensions'
CMake Error at cmake/Codegen.cmake:225 (message):
  Failed to get generated_headers list
Call Stack (most recent call first):
  caffe2/CMakeLists.txt:2 (include)

but of course typing_extensions was already installed. No idea where to go from here.
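One possible explanation for that build error: typing_extensions was installed for a different Python interpreter than the one the CMake codegen step invokes. A quick sketch to check whether the module is visible to the interpreter you are actually running (helper name is mine, purely illustrative):

```python
# Check whether typing_extensions is importable by THIS interpreter,
# and print which interpreter that is.
import importlib.util
import sys


def module_visible(name="typing_extensions"):
    """Return the module's location if importable here, else None."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None


if __name__ == "__main__":
    print("interpreter:", sys.executable)
    location = module_visible()
    if location:
        print("typing_extensions found at:", location)
    else:
        # Install it for this exact interpreter, not whatever 'pip' resolves to
        print(f"not found; try: {sys.executable} -m pip install typing_extensions")
```

If this prints "not found" under the interpreter the build uses, installing with `<that interpreter> -m pip install typing_extensions` should get past the codegen failure.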


Hi,

There is a torch-2.5 and torchvision-0.20 install step in the topic below.

Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.