Building with LibTorch fails: cmake reports "/usr/bin/ld: cannot find -lCUDA_cublas_device_LIBRARY-NOTFOUND"

Hello everyone,

I am working on a Jetson TX2 with the latest JetPack (4.6.1). I installed PyTorch 1.9.0 from the official wheel file provided by NVIDIA, and I am trying to compile a small C++ example using LibTorch.

The CMake-generated build fails at the linking stage because the linker reports it cannot find CUDA_cublas_device.

First of all, it’s not clear to me where this dependency comes from (I have printed TORCH_LIBRARIES and TORCH_CXX_FLAGS and neither mentions libcublas). Has anyone found a solution to this problem?

CMake version is 3.10
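For context, the project uses a minimal CMakeLists.txt along the lines of the official LibTorch example (target and file names here are illustrative, not the poster's actual files):

```cmake
cmake_minimum_required(VERSION 3.10 FATAL_ERROR)
project(example-app)

# CMAKE_PREFIX_PATH must point at the libtorch install shipped inside
# the PyTorch wheel, e.g.
#   -DCMAKE_PREFIX_PATH=`python3 -c 'import torch; print(torch.utils.cmake_prefix_path)'`
find_package(Torch REQUIRED)

add_executable(example-app example-app.cpp)
target_link_libraries(example-app "${TORCH_LIBRARIES}")
set_property(TARGET example-app PROPERTY CXX_STANDARD 14)
```

With a CUDA-enabled LibTorch, find_package(Torch) pulls in CUDA support via CMake's bundled FindCUDA module, which is plausibly where the cublas_device reference originates even though TORCH_LIBRARIES itself never mentions it.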

Hi @febo, do you have the CUDA toolkit installed on your device? (you should find it under /usr/local/cuda)

Also, is CUDA in your PATH and LD_LIBRARY_PATH?

export PATH=/usr/local/cuda/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}

Hi @dusty_nv, thanks for your reply. Yes, CUDA is installed and reachable through the PATH and LD_LIBRARY_PATH environment variables.

After further research, it seems to be a CMake bug fixed in 3.12 (see CMake commits a173118f, "Merge branch 'cuda-no-cublas_device'", and d5151129, "Merge branch 'FindCUDA-deprecate-cublas_device'", on the CMake GitLab).

At this point I should upgrade CMake to 3.12.2, but I wonder whether that is going to break something else inside JetPack 4.6.1: do you have experience with that? I could also cherry-pick those two commits, but then I would be forced to recompile CMake, which I would like to avoid if possible.

Solved by editing /usr/share/cmake-3.10/Modules/FindCUDA.cmake as detailed in the second commit above (simply add AND CUDA_VERSION VERSION_LESS "9.2" to the condition at line 960).
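For anyone hitting the same problem, the patched guard ends up looking roughly like the sketch below. The placeholder stands for whatever condition your copy of FindCUDA.cmake already has at that line; only the VERSION_LESS "9.2" clause is the addition:

```cmake
# /usr/share/cmake-3.10/Modules/FindCUDA.cmake, around line 960 (sketch).
# cublas_device was deprecated in CUDA 9.2 and removed from later
# toolkits, so it must only be searched for on older CUDA versions;
# otherwise the lookup fails and the placeholder value
# CUDA_cublas_device_LIBRARY-NOTFOUND leaks onto the link line.
if(<existing condition> AND CUDA_VERSION VERSION_LESS "9.2")
  find_cuda_helper_libs(cublas_device)
endif()
```

After editing the module, delete the build directory (or at least CMakeCache.txt) so the cached -NOTFOUND value is not reused on the next configure.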

OK gotcha, glad you were able to get it working. Yes, I've upgraded CMake before (but only in containers), so I think simply editing your FindCUDA.cmake is probably easier for the time being.

If you do need to upgrade CMake at some point, these are the steps I follow to do it inside docker (you can run these same commands outside of containers) either via apt or pip: https://github.com/dusty-nv/jetson-containers/blob/59f840abbb99f22914a7b2471da829b3dd56122e/Dockerfile.pytorch#L106


Thanks a lot @dusty_nv for the support & useful link! Have a nice day
