For all other libraries (including libcudart.so.12, which is in the same directory as libnvJitLink.so.12), I do not need to set an explicit LD_LIBRARY_PATH (presumably because the compiler is using some sort of rpath?).
It would be useful to also have this library be linked in the same manner to avoid having to set an LD_LIBRARY_PATH in the compiler environment setup.
libnvJitLink.so is new in CUDA 12, which is likely why this worked before. Though for better or worse, I'm not able to recreate the issue here. While we don't link this library directly (it's a dependency of libcusparse.so), we do add an rpath to the CUDA runtime libraries at link time, so the loader should be able to find it.
Can you run “ldd” on libcusparse.so and your executable so we can see how the loader is resolving the paths?
For example, here’s what it looks like on my system. Note that I do not have LD_LIBRARY_PATH set.
Also, what CUDA version is being used? (if you don’t know, check the CUDA driver via nvidia-smi)
Besides -cudalib=cusparse, what other flags do you use on the link?
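For reference, the inspection might look like this (the NVHPC install path and the executable name are assumptions for a default 24.3 layout; adjust both to your system):

```shell
# Diagnostic sketch: see which libnvJitLink the loader resolves for
# libcusparse and for the linked executable.
NVHPC=/opt/nvidia/hpc_sdk/Linux_x86_64/24.3   # assumption: default install path
ldd "$NVHPC/math_libs/lib64/libcusparse.so" | grep -i nvjitlink
ldd ./pot3d | grep -i nvjitlink               # assumption: executable name
```

If the resolved path points at a system CUDA install rather than the NVHPC tree, the loader is not using the NVHPC rpath for this library.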
I did a local install of 24.3. While the loader was able to find libnvJitLink, it found it under a local CUDA install rather than the one that we ship with NVHPC.
Likely what’s going on is that since it’s a dependent library and isn’t on the link line, the loader doesn’t apply the rpath when resolving it.
I think the solution here is for us to implicitly add it to the link line so its rpath gets set. I’ve opened a report, TPR #35636.
Besides setting LD_LIBRARY_PATH, a workaround for you would be to add “-lnvJitLink” to your link line.
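A sketch of the workaround on the link line (the source file name and the extra flags are assumptions; only -cudalib=cusparse and -lnvJitLink are the point here):

```shell
# Adding -lnvJitLink explicitly puts the library on the link line,
# so the linker records an rpath for it.
nvfortran -acc -cudalib=cusparse -lnvJitLink pot3d.f90 -o pot3d
```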
On a possibly related note, when I try to install POT3D on WSL on Windows 11, I get an error saying it cannot find “-lcuda”.
I have the NV HPC SDK installed, but I do NOT have the CUDA toolkit installed on either WSL or Windows directly. I do have the NVIDIA App installed on Win11 and the latest game-ready drivers.
This is because the CUDA driver (libcuda.so) gets installed in a non-default location on WSL. Hence the linker and loader can’t find it (at least by default).
Though, you shouldn’t need to link with -lcuda unless you’re calling the CUDA device API directly. Typically, this library gets dynamically loaded (via dlopen) at runtime. Because of this, I don’t think (though I’m not sure) that adding an rpath to it would help. You’ll likely need to add the path to LD_LIBRARY_PATH. Adding the export to your shell config file (such as .bashrc) will save you from having to set it each time.
Hi Ron,
Our 24.5 release is now out, and it includes the change for FS#35636 referenced above: libnvJitLink is now added to the link line when you use -cudalib=cusparse.
FYI, I use WSL on my laptop and have the same issue as you. I have this in my .bashrc:
export PATH=/opt/nvidia/hpc_sdk/Linux_x86_64/24.3/compilers/bin:$PATH
export LD_LIBRARY_PATH=/usr/lib/wsl/lib
In case it’s useful, this solution process works for me:

Problem: the dynamic linker (ld-linux.so) cannot find libnvJitLink.so.12 when running a CUDA application.

Solution: the path to libnvJitLink.so.12 must be found manually and set via LD_LIBRARY_PATH, in every Python venv, at the start of every session (at least when alternating between many envs).
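One way to avoid re-setting it per session is to bake the export into each venv’s activate script, so it is applied automatically on activation. A minimal sketch, where the venv name and the NVHPC library path are assumptions you’d replace with your own:

```shell
# Append the LD_LIBRARY_PATH export to a venv's activate script.
VENV=./myenv
mkdir -p "$VENV/bin"   # stand-in here for: python3 -m venv myenv
NVJIT_DIR=/opt/nvidia/hpc_sdk/Linux_x86_64/24.3/math_libs/lib64   # assumption
printf 'export LD_LIBRARY_PATH=%s:$LD_LIBRARY_PATH\n' "$NVJIT_DIR" >> "$VENV/bin/activate"
```

After this, `source myenv/bin/activate` sets LD_LIBRARY_PATH for that env without any manual step.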