OSError: libcufft.so.10: cannot open shared object file: No such file or directory

Hello,

I have been working with YOLOv8 on my Jetson Orin NX for a couple of weeks, but running everything on the CPU because CUDA was not available even though I had it installed. Now I am trying to fix this so I can run on the GPU and train my models faster, but I haven’t found a solution yet.
For context, I have JetPack 5.1.2, Python 3.8 and CUDA 12.4. After some research, I found that the PyTorch version I needed to install to be compatible with JetPack was 2.1.0a. I uninstalled the previous version of PyTorch and followed the tutorial below to install the new one.

After the installation, when I import torch, the output is:
Traceback (most recent call last):
File "/home/laura/.local/lib/python3.8/site-packages/torch/__init__.py", line 168, in _load_global_deps
ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL)
File "/usr/lib/python3.8/ctypes/__init__.py", line 373, in __init__
self._handle = _dlopen(self._name, mode)
OSError: libcufft.so.10: cannot open shared object file: No such file or directory
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/laura/.local/lib/python3.8/site-packages/torch/__init__.py", line 228, in <module>
_load_global_deps()
File "/home/laura/.local/lib/python3.8/site-packages/torch/__init__.py", line 189, in _load_global_deps
_preload_cuda_deps(lib_folder, lib_name)
File "/home/laura/.local/lib/python3.8/site-packages/torch/__init__.py", line 154, in _preload_cuda_deps
raise ValueError(f"{lib_name} not found in the system path {sys.path}")
ValueError: libcublas.so.*[0-9] not found in the system path ['', '/usr/lib/python38.zip', '/usr/lib/python3.8', '/usr/lib/python3.8/lib-dynload', '/home/laura/.local/lib/python3.8/site-packages', '/usr/local/lib/python3.8/dist-packages', '/usr/lib/python3/dist-packages', '/usr/lib/python3.8/dist-packages']
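
For reference, a quick way to check which of those CUDA library sonames the dynamic loader can actually resolve (a rough sketch; the sonames below are just the ones from the traceback plus their CUDA 12 counterparts):

import ctypes

# Sonames from the traceback above: CUDA 11.x ships libcufft.so.10 and
# libcublas.so.11, while CUDA 12.x ships libcufft.so.11 and libcublas.so.12.
for name in ("libcufft.so.10", "libcufft.so.11", "libcublas.so.11", "libcublas.so.12"):
    try:
        ctypes.CDLL(name, mode=ctypes.RTLD_GLOBAL)
        print(name, "-> OK")
    except OSError as err:
        print(name, "->", err)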

I did some research and found that this error could be related to the libraries and the CUDA version, but I’m not sure installing another version would help, as I have already tried CUDA 11.4, 12.4 and 12.6.
Just in case it’s useful, when I run the command $ ls /usr/local/cuda-12.4/lib64 the output is:

[screenshot of the directory listing]

Any idea on how to solve this would be welcome.
Thank you in advance!

Hi,

The default CUDA version in JetPack 5.1.2 should be 11.4.
Did you manually upgrade it?

But the latest CUDA available for JetPack 5, which uses Ubuntu 20.04, is 12.2.
Could you tell us how you installed CUDA 12.4?

We share the prebuilt, GPU-enabled PyTorch at the link you shared above.
But it requires the default CUDA version.
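
As a quick sanity check, one can compare the CUDA version the wheel was built against with the toolkit selected on the device (a minimal sketch, assuming the usual /usr/local/cuda symlink):

import os
import torch

# CUDA version the installed PyTorch wheel was built against (None for CPU-only builds).
print("torch built with CUDA:", torch.version.cuda)

# Toolkit currently selected on the device, assuming the standard /usr/local/cuda symlink.
print("active CUDA toolkit:", os.path.realpath("/usr/local/cuda"))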

For other combinations (e.g. JetPack 5 + CUDA 12.2), you can build PyTorch from source to get the package.
The build instructions can be found in the topic below:

Thanks.

Hello,
I’ve installed CUDA 11.4 and tried importing torch again. The error I encountered when importing torch has been resolved. However, I still can’t use CUDA: when I run print(torch.cuda.is_available()), it returns False (a small check that surfaces the underlying error is sketched after the output below).
When I run python3 -m torch.utils.collect_env, the output is:

Collecting environment information...
PyTorch version: 2.1.0a0+41361538.nv23.06
Is debug build: False
CUDA used to build PyTorch: 11.4
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.6 LTS (aarch64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31

Python version: 3.8.10 (default, Sep 11 2024, 16:02:53)  [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.10.120-tegra-aarch64-with-glibc2.29
Is CUDA available: False
CUDA runtime version: 11.4.48
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/aarch64-linux-gnu/libcudnn.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_cnn_train.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so.8.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False

CPU:
Architecture:                    aarch64
CPU op-mode(s):                  32-bit, 64-bit
Byte Order:                      Little Endian
CPU(s):                          8
On-line CPU(s) list:             0-7
Thread(s) per core:              1
Core(s) per socket:              4
Socket(s):                       2
Vendor ID:                       ARM
Model:                           1
Model name:                      ARMv8 Processor rev 1 (v8l)
Stepping:                        r0p1
CPU max MHz:                     1984,0000
CPU min MHz:                     115,2000
BogoMIPS:                        62.50
L1d cache:                       512 KiB
L1i cache:                       512 KiB
L2 cache:                        2 MiB
L3 cache:                        4 MiB
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:        Mitigation; __user pointer sanitization
Vulnerability Spectre v2:        Mitigation; CSV2, but not BHB
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected
Flags:                           fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp uscat ilrcpc flagm

Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] torch==2.1.0a0+41361538.nv23.6
[conda] Could not collect
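
In case it helps, here is the small check mentioned above; forcing CUDA initialization usually prints something more informative than a plain False (a minimal sketch):

import torch

print("torch:", torch.__version__, "| built with CUDA:", torch.version.cuda)
print("is_available:", torch.cuda.is_available())

try:
    # Moving a tensor to the GPU forces PyTorch to initialize the CUDA driver,
    # which raises an explicit error if the driver or runtime cannot be loaded.
    torch.zeros(1).cuda()
except Exception as err:
    print("CUDA initialization failed:", err)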

Do you have any suggestions on how to proceed next?
Thank you in advance.

Hi,

Based on the log below:

Is CUDA available: False

Have you installed the CUDA and cuDNN packages on the device?
These packages are part of the JetPack components.
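
A quick way to verify both are present is to check whether their runtime libraries resolve on the device (a minimal sketch; the sonames are the ones JetPack 5 typically ships, so adjust them if your versions differ):

import ctypes
import ctypes.util

# Sonames typically shipped with JetPack 5: CUDA 11.4 runtime and cuDNN 8.x.
for name in ("libcudart.so.11.0", "libcudnn.so.8"):
    try:
        ctypes.CDLL(name)
        print(name, "-> loaded")
    except OSError as err:
        print(name, "->", err)

# find_library consults the loader cache (ldconfig) on Linux.
print("cudart on loader path:", ctypes.util.find_library("cudart"))
print("cudnn on loader path:", ctypes.util.find_library("cudnn"))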

Thanks.
