CUDA code fails to find GPU


I have (I hope) installed everything and can compile a simple program that references my GPU, but when I run it, it fails to detect the GPU. I get a similar error when I build and run some of the examples from the ‘cuda-samples’ repository on GitHub.

My simple code tries to use the OpenACC library to detect the GPU, e.g.


which returns

and is compiled like so…

For the cuda-samples code I have built and run, I get errors similar to the following.

Here is the output from pgaccelinfo, nvidia-smi, lsmod, and the device listing…



Hopefully it’s something simple. I am using a Tesla GPU in a Dell C4130 blade in a data-server configuration, so it is not a graphics card; I want to use the GPU for computation.

Thank you in advance. I have exhausted my skill set here !!! (did not take long ??)



OK, solved. I was using ‘call’ on what is actually a function, and (owing to some other historical issues I had) the ‘call’ sneaks through the compiler without flagging a concern. Removing the ‘call’ and replacing it with ‘j =’ made it work.
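In code, the change amounts to the following (a sketch; `j` is assumed to be an integer already declared in my program):

```fortran
! before: treated as a subroutine call, compiles but is silently wrong
!   call acc_get_num_devices(acc_device_nvidia)

! after: acc_get_num_devices is a function, so capture its result
j = acc_get_num_devices(acc_device_nvidia)
```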

Thank you

…and using include ‘accel_lib.h’

just in case anyone else has this issue

If you had used “implicit none” then you would have seen an error like:

NVFORTRAN-S-0038-Symbol, acc_get_num_devices, has not been explicitly declared (test.F90: 1)
NVFORTRAN-S-0038-Symbol, acc_device_nvidia, has not been explicitly declared (test.F90: 1)

The problem with your code is that under implicit typing, “acc_device_nvidia” becomes a real-typed variable rather than the integer parameter the API expects.
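A minimal sketch of the pitfall, assuming no prior declarations at all:

```fortran
program implicit_pitfall
  ! With no "implicit none" and no "use openacc", the line below is
  ! accepted: acc_get_num_devices is implicitly treated as an external
  ! subroutine, and acc_device_nvidia as an implicitly typed REAL
  ! variable with an undefined value, so garbage is passed to the API.
  call acc_get_num_devices(acc_device_nvidia)
end program implicit_pitfall
```

Adding “implicit none” turns both mistakes into the NVFORTRAN-S-0038 compile-time errors shown above instead of a silent runtime failure.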

and using include ‘accel_lib.h’

“accel_lib.h” is more for F77-style programming and covers our OpenACC extensions, not the standard OpenACC API.

The standard module you should use is “use openacc”. You can then “use accel_lib” if you need our extended API, such as printing the present table with “acc_present_dump”.
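Putting the pieces together, a minimal corrected version of the device-count check looks like this (compiled with nvfortran and the OpenACC runtime; the program name and output text are illustrative):

```fortran
program devcount
  use openacc      ! standard OpenACC module: declares the API interfaces
                   ! and the integer parameter acc_device_nvidia
  implicit none    ! catches any undeclared or mistyped symbols
  integer :: n

  ! acc_get_num_devices is a function, so assign its result
  n = acc_get_num_devices(acc_device_nvidia)
  print *, 'Number of NVIDIA devices: ', n
end program devcount
```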

Hope this helps,

You legend, please clone yourself for future generations !!

Thank you