Unable to access memory at location: cuda-gdb with CUDA Fortran

Hello,
I am new to CUDA Fortran and am currently trying to learn cuda-gdb, but I have run into some issues. First, I am unable to print subroutine input variables when debugging inside the kernel; I receive the message “cannot access memory at address: 0x5”, where the address (0x5) changes depending on the size of the array.

In addition to this, I am completely unable to see any shared variables that the current thread has access to. When I try to print them I get “No symbol “variable name” in current context.”

Below is a screenshot of my attempts to access the input and shared variables, along with a minimal reproducing example and my compilation flags.

flags: nvfortran -O0 -g -gpu=debug,nordc -cuda scratch.f90
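For reference, a cuda-gdb session that reproduces the two errors looks roughly like the sketch below. The breakpoint name is an assumption — nvfortran may mangle the module procedure name (e.g. to something like `m_var_vis_test_`), in which case tab completion or `info functions var_vis` can locate it; `a.out` is just the default executable name from the compile line above.

```
$ cuda-gdb ./a.out
(cuda-gdb) break var_vis_test      # kernel name; may need the mangled symbol instead
(cuda-gdb) run
(cuda-gdb) info cuda threads       # confirm we are stopped inside the kernel
(cuda-gdb) print n                 # scalar 'value' argument
(cuda-gdb) print array_d           # -> "cannot access memory at address: 0x5"
(cuda-gdb) print array_shared      # -> "No symbol ... in current context"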

Example:

module m
    use cudafor

    contains

    attributes(global) subroutine var_vis_test(array_d, n)
    ! Declare input variable
    integer, value :: n
    real :: array_d(:)

    ! Declare internal subroutine variables
    real, shared :: array_shared(n)

    if (threadIdx%x <= n) array_shared(threadIdx%x) = 1.0
    if (threadIdx%x <= size(array_d)) array_d(threadIdx%x) = array_d(threadIdx%x) + array_shared(threadIdx%x)
    end subroutine var_vis_test

end module m

program variable_visability_test
    use m
    use cudafor

    real :: array_h(5)
    real, device :: a_d(5)

    integer :: n = 4

    ! Initialize the host array (it was previously printed and copied uninitialized)
    array_h = 0.0

    print*, 'Array before subroutine: ', array_h

    a_d = array_h
    ! Third chevron argument: dynamic shared memory (in bytes) for array_shared,
    ! required because the shared array has nonconstant bounds
    call var_vis_test<<<1, n, 4*n>>>(a_d, n)
    array_h = a_d

    print*, 'Array after subroutine: ', array_h


end program variable_visability_test

Other potentially pertinent info:
1. This behavior persists whether or not the input variable is declared as device in the kernel.
2. Making the file “scratch.cuf” rather than “scratch.f90” and removing the -cuda flag doesn’t affect this behavior.
3. When trying to use the debugger GUI with Nsight Visual Studio Code Edition, any time I try to step into a kernel I get this message in the VS Code debugging terminal, and then it hangs:
“cuda-gdb has received a SIGSEGV and will attempt to get its own backtrace.”
4. I am using CUDA 12.7 installed on WSL2 running Ubuntu 24.04 with driver version 566.36. I compared this behavior against a native Linux install (also Ubuntu 24.04, but with CUDA 12.4 and driver version 550.120) and saw the same thing, so I am not inclined to say it’s a WSL2 issue.

Any help on this is greatly appreciated, as it’s been driving me nuts trying to figure out what’s wrong.

Thank you very much for your time!

-Gabe D.

Hi Gabe,

Thanks for reaching out. I can confirm that there are a few compiler issues present here. We will work on getting these fixed in a future release of the NVIDIA HPC SDK.


Sounds good, and thanks for the update. Any idea whether older versions of CUDA had this problem too? I’m tempted to try one or two.