I was wondering if the HPC SDK could (or does) make an “activation” script as part of its installation?
This activation script would be one the user can source to set PATH, LD_LIBRARY_PATH, and other environment variables (such as CUDA_HOME) needed to use the SDK.
I could not find such a script, or even documentation listing all the paths to add to the user's environment. I made my own for now, but an automatically generated script would be very useful.
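For reference, a minimal sketch of what such a sourceable script might look like. The install prefix, directory layout, and CUDA version below are assumptions based on a default Linux install; adjust them to match your installation:

```shell
# Hypothetical activation script (the SDK does not ship this; names/paths are assumptions).
# Usage: source nvhpc_env.sh
NVHPC_ROOT="${NVHPC_ROOT:-/opt/nvidia/hpc_sdk/Linux_x86_64/2020}"
NVHPC_CUDA_VERSION="${NVHPC_CUDA_VERSION:-11.0}"

# Compiler and MPI binaries
export PATH="$NVHPC_ROOT/compilers/bin:$NVHPC_ROOT/comm_libs/mpi/bin:$PATH"

# Runtime libraries for compiled programs
export LD_LIBRARY_PATH="$NVHPC_ROOT/compilers/lib:$NVHPC_ROOT/comm_libs/mpi/lib:${LD_LIBRARY_PATH:-}"

# Point CUDA_HOME at the SDK's bundled CUDA rather than a system-wide one
export CUDA_HOME="$NVHPC_ROOT/cuda/$NVHPC_CUDA_VERSION"
export LD_LIBRARY_PATH="$CUDA_HOME/lib64:$LD_LIBRARY_PATH"

# Man pages
export MANPATH="$NVHPC_ROOT/compilers/man:${MANPATH:-}"
```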
Hi Ron, welcome to the HPC SDK Forum.
We include module files. I usually just set my path to the compilers/bin directory, and everything I do just works. Can you give some detail on when you need to set more than that? (MPI, of course…)
I just looked at the module files and they do pretty much what I do (using the 2020 one and the MPI one). I do not have “module” installed on my local systems, and it is not available in the standard APT repositories. As such, a user just has to know which directories to add to which paths.
Also, it seems there are duplicate binaries for some things.
For example, there is an Nsight Systems “nsys” binary in the compilers/bin folder and another in the profiler/NsightSystems bin folder, and they are not the same.
If I want to use Nsight Systems or Nsight Compute, which bin should I use?
One other thing I set up manually is the CUDA library path, the CUDA path, and the CUDA_HOME environment variable.
I do not think the module files do this.
I don’t know if they are necessary for compiled code, but since I have an Ubuntu-installed CUDA library on my system, I wanted to make sure there are no conflicts.
A simple bash script to source that takes care of all of this in one shot would be nice.
One other thing I noticed in the module files is that they set CC=nvc, FC=nvfortran, etc. This breaks the builds of a lot of source packages (such as zlib and hdf5) whose configure scripts have special sections for PGI but do not know about the NV compiler names. It might be better for now to use the legacy PGI names in the module file, or at least provide an alternative module file that does so. I can see a lot of people having issues with this.
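As a stopgap, since the SDK still ships the legacy PGI driver names (pgcc, pgc++, pgfortran) alongside the new ones in compilers/bin, one workaround is to override the module's variables before running configure. This is a sketch under the assumption that those legacy drivers exist on your install; verify that first:

```shell
# Override the module's CC/FC/etc. with the legacy PGI driver names,
# which older configure scripts recognize as "PGI" compilers.
export CC=pgcc
export CXX=pgc++
export FC=pgfortran
export F77=pgfortran
# ./configure --prefix=...   # configure's PGI-specific branches now apply
```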
Good feedback, thanks.
We provide driver executables in our compilers/bin directory that parse a minimal set of options (like the CUDA version) and invoke the correct CUDA executable, like nsys and nvcc, under the CUDA directories. For experienced developers this might be too much magic, but hopefully new users will find it convenient.
Do you use the -Mcudalib option, or do you add the CUDA library paths and names manually? If you use -Mcudalib, you shouldn’t need to set a library path, unless you want to use your own library area.
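For illustration, a sketch of the difference (the source file name is hypothetical, and the manual form assumes CUDA_HOME points at the bundled CUDA; library versions vary by install):

```shell
# The driver resolves the CUDA library paths and names itself:
nvfortran -cuda -Mcudalib=cublas,cufft saxpy.cuf -o saxpy

# Roughly equivalent manual form (paths are an assumption):
# nvfortran -cuda saxpy.cuf -o saxpy -L"$CUDA_HOME/lib64" -lcublas -lcufft
```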
Good point about the module environment variables. Maybe we need two sets of modules.