Given users are free to install the CUDA SDK in any base location they like, my assumption is that the location of the CUDA installation cmake uses is configurable.
I did find the following page: FindCUDAToolkit — CMake 3.28.0-rc6 Documentation. It looks like you may need to set `-DCUDAToolkit_ROOT=/path/to/cuda/installation` when using installations other than “/usr/local/cuda”. I'm not a cmake expert myself, though, so questions on using cmake are probably best addressed to Kitware (the makers of cmake).
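For example, a configure command along these lines should do it (the project and install paths below are placeholders; adjust them to your system):

```
# Hypothetical example: point cmake at a non-default CUDA Toolkit install
cmake -DCUDAToolkit_ROOT=/opt/nvidia/hpc_sdk/Linux_x86_64/20.5/cuda/11.0 \
      -S /path/to/your/project -B build
```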
The problem is that any single location, such as /usr/local/cuda, cannot refer to both of these trees simultaneously.
Again, I’m not an expert in cmake, but my assumption would be that CUDA_INCLUDE_DIRS and CUDA_CUDART_LIBRARY refer to directories under the root CUDA installation. So fixing where cmake finds the CUDA install’s root directory may allow these directories to be found as well.
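If that assumption holds, the headers and libraries those variables refer to should live in the usual subdirectories of whichever root cmake is pointed at. A typical CUDA Toolkit root looks roughly like this (an illustrative sketch; exact contents vary by release):

```
/path/to/cuda/installation/
├── bin/        # nvcc, cuda-gdb, ...
├── include/    # cuda_runtime.h and the other CUDA headers
└── lib64/      # libcudart.so and the other CUDA libraries
```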
Also, the SDK has a combination of options:
- 2020? 20.5?
- 10.1? 10.2? 11.0?
“20.5” is the HPC Compiler (formerly PGI) installation for the 20.5 release. Additional releases can be co-installed, so if you install the upcoming 20.7 release, it will be installed next to 20.5 without overwriting it.
“2020” is a common directory for packages, such as OpenMPI or NetCDF, shared by all HPC Compilers released in 2020. No need to reinstall them if you install a new version of the compilers.
“10.1”, “10.2”, and “11.0” are the various CUDA installations packaged with the compilers for convenience. There’s no need to install them separately, but you can certainly configure the compilers to use your own CUDA SDK installation.
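To make the relationship between these directories concrete, the HPC SDK install tree looks roughly like this (a sketch based on the paths discussed in this thread; exact contents vary by release):

```
/opt/nvidia/hpc_sdk/Linux_x86_64/
├── 20.5/                 # this release of the HPC Compilers
│   ├── compilers/bin/    # nvc, nvc++, nvfortran, the nvcc wrapper, ...
│   └── cuda/             # bundled CUDA installs: 10.1, 10.2, 11.0
└── 2020/                 # packages shared by all 2020 releases (OpenMPI, NetCDF, ...)
```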
In an earlier thread it sounded like I need to match one of these with the version number of the driver, but the SDK documentation is silent.
Details about the CUDA installations and configuration can be found in the HPC Compilers User's Guide: HPC Compilers User's Guide Version 23.11 for ARM, OpenPower, x86
As noted in the documentation, the HPC Compilers do check the CUDA Driver version on the system to set the default CUDA version to use, but this is easily overridden via command-line options or environment variables such as CUDA_HOME.
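For example, either of the following selects a CUDA version other than the default (the file names and version numbers are placeholders, and the exact option spelling depends on your compiler release):

```
# Select one of the bundled CUDA versions via a compiler option
nvc++ -acc -gpu=cuda11.0 -o app main.cpp

# Or point the compilers at your own CUDA installation via the environment
export CUDA_HOME=/usr/local/cuda-11.0
nvc++ -acc -o app main.cpp
```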
I also note that nvc++ is located ONLY here, with no other versions anywhere else:
/opt/nvidia/hpc_sdk/Linux_x86_64/20.5/compilers/bin/nvc++
I’m rather confused at this point.
nvc++ is the HPC C++ compiler and does not currently support compiling CUDA C programs. For CUDA C, you need to use the nvcc compiler. The HPC Compiler bin directory does include an nvcc, but it’s actually just a wrapper that invokes the correct nvcc from the various CUDA co-installs, depending on which CUDA version is in use.
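As a quick illustration (the source file names here are hypothetical): CUDA C goes through nvcc, while standard C++ goes through nvc++:

```
export PATH=/opt/nvidia/hpc_sdk/Linux_x86_64/20.5/compilers/bin:$PATH

nvcc  -o saxpy saxpy.cu    # the nvcc wrapper dispatches to one of the bundled CUDA installs
nvc++ -o myapp main.cpp    # nvc++ compiles C++, not CUDA C
```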
The HPC SDK is a bundle of various NVIDIA and third-party products: the HPC Compilers, multiple versions of CUDA, profilers, CUDA enabled math libraries, and builds of third-party libraries. It’s not a single product. Perhaps that’s where the confusion is?