How to install the multi-CUDA-version HPC SDK (why doesn't NVHPC_DEFAULT_CUDA take effect?)

Following the NVIDIA HPC SDK Installation Guide, I exported NVHPC_DEFAULT_CUDA=10.2 in the terminal, but the default CUDA is still 11.7.
Steps:

  1. export NVHPC_DEFAULT_CUDA=10.2 # define default cuda version
  2. ./install # install the nvhpc_2022_227_Linux_x86_64_cuda_multi
    However, after the installer completed, I found that the symlinks in the directory /opt/nvidia/hpc_sdk/Linux_x86_64/22.7/cuda are as follows:
    bin → 11.7/bin
    include → 11.7/include
    lib64 → 11.7/lib64
    nvvm → 11.7/nvvm

In addition, I found that the output of /opt/nvidia/hpc_sdk/Linux_x86_64/compilers/bin/nvc -printcudaversion is 11.7.
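For reference, the installed CUDA toolkits and the compiler's chosen default can be checked like this (paths assume the default install prefix from the post; the check degrades gracefully if the SDK is elsewhere):

```shell
# Default install prefix for the 22.7 release; adjust if you installed elsewhere.
NVHPC=/opt/nvidia/hpc_sdk/Linux_x86_64/22.7

if [ -d "$NVHPC/cuda" ]; then
    # Each co-installed CUDA toolkit lives in its own versioned subdirectory,
    # e.g. 10.2 11.0 11.7, alongside the bin/include/lib64 symlinks.
    ls "$NVHPC/cuda"
    # Report the CUDA version the compiler will target by default.
    "$NVHPC/compilers/bin/nvc" -printcudaversion
else
    echo "HPC SDK not found at $NVHPC"
fi
```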
Why?

Does NVHPC_DEFAULT_CUDA have any effect?
How do I install the HPC SDK so it supports an older CUDA toolkit?

My environment is as follows:
OS: RHEL 8.5
NVIDIA GPU Driver: 470.86 (nvidia-x11-drv-libs-470.86-2.el8_5.elrepo.x86_64, nvidia-x11-drv-470.86-2.el8_5.elrepo.x86_64, kmod-nvidia-470.86-2.el8_5.elrepo.x86_64)
HPC-SDK: nvhpc_2022_227_Linux_x86_64_cuda_multi

Moved to the HPC SDK forum

Hi ysliu,

The default CUDA is what's used when no CUDA driver is found; otherwise, the compiler uses the CUDA version that best matches the CUDA driver found on the system, or the one set via the “-gpu=cudaXX.y” flag.
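As a concrete illustration of the flag, a compile line pinning CUDA 10.2 might look like this (the source file name saxpy.c is a placeholder, not from the thread; the command only runs if nvc is on PATH):

```shell
# saxpy.c is a hypothetical OpenACC source file.
# -gpu=cuda10.2 pins the CUDA toolkit used for device code generation,
# overriding the driver-matched default.
if command -v nvc >/dev/null 2>&1; then
    nvc -acc -gpu=cuda10.2 -Minfo=accel saxpy.c -o saxpy
else
    echo "nvc not on PATH"
fi
```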

The “NVHPC_DEFAULT_CUDA” environment variable only affects this default setting (which is recorded in the generated compiler configuration file, “localrc”). The symlinks to the latest CUDA version are separate and do not change when the environment variable is set.
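One way to confirm which default was actually recorded is to inspect the generated localrc directly (the path assumes the default install prefix; the grep is case-insensitive because the exact key name may vary between releases):

```shell
# Location of the generated compiler configuration for the 22.7 release.
LOCALRC=/opt/nvidia/hpc_sdk/Linux_x86_64/22.7/compilers/bin/localrc

if [ -f "$LOCALRC" ]; then
    # Show any CUDA-related settings, including the recorded default version.
    grep -i cuda "$LOCALRC"
else
    echo "localrc not found at $LOCALRC"
fi
```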

Hope this helps,
Mat

Ok! Thanks a lot! I will try the flag “-gpu=cudaXX.y”.