Jetson Nano fails in simpleCUFFT sample

Hi, I am new to the CUDA family and I recently got a Jetson Nano.
I am testing the simpleCUFFT sample from the CUDA 10.2 toolkit and I came across an issue.

I was able to run the sample as-is, but I am interested in seeing how the program handles a large signal, so I changed the signal length to the following:

// The filter size is assumed to be a number smaller than the signal size
#define SIGNAL_SIZE 10000000
#define FILTER_KERNEL_SIZE 128

That is the only change I made to the code, but with it the program gets killed without further explanation:

[simpleCUFFT] is starting...
GPU Device 0: "Maxwell" with compute capability 5.3

Killed

I also ran deviceQuery to check whether CUDA is functional at all. The result is the following:

./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "NVIDIA Tegra X1"
  CUDA Driver Version / Runtime Version          10.2 / 10.2
  CUDA Capability Major/Minor version number:    5.3
  Total amount of global memory:                 1972 MBytes (2067730432 bytes)
  ( 1) Multiprocessors, (128) CUDA Cores/MP:     128 CUDA Cores
  GPU Max Clock rate:                            922 MHz (0.92 GHz)
  Memory Clock rate:                             13 Mhz
  Memory Bus Width:                              64-bit
  L2 Cache Size:                                 262144 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 32768
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 1 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            Yes
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Compute Preemption:            No
  Supports Cooperative Kernel Launch:            No
  Supports MultiDevice Co-op Kernel Launch:      No
  Device PCI Domain ID / Bus ID / location ID:   0 / 0 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 10.2, CUDA Runtime Version = 10.2, NumDevs = 1
Result = PASS

So I suspect the issue is that there isn't enough memory on the GPU to allocate the buffers for a 10-million-sample signal. But how can I check whether this is true? Any help is appreciated!
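Would something like the following be a reasonable way to check? It is just a standalone sketch I put together from the CUDA runtime and cuFFT docs, not code from the sample, and the size arithmetic is my own estimate (if I read the sample right, the padded buffers hold SIGNAL_SIZE + FILTER_KERNEL_SIZE - 1 complex floats each).

// Standalone sketch (my own, not part of the sample): report free vs. total
// device memory and a rough estimate of what the enlarged run would need.
#include <cstdio>
#include <cuda_runtime.h>
#include <cufft.h>

int main(void) {
    size_t free_bytes = 0, total_bytes = 0;
    cudaError_t err = cudaMemGetInfo(&free_bytes, &total_bytes);
    if (err != cudaSuccess) {
        printf("cudaMemGetInfo failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("Device memory: %zu MB free of %zu MB total\n",
           free_bytes >> 20, total_bytes >> 20);

    // My estimate of the padded length (SIGNAL_SIZE + FILTER_KERNEL_SIZE - 1);
    // each element is a cufftComplex (8 bytes), and both the padded signal
    // and the padded filter are copied to the device.
    const size_t n = 10000000 + 128 - 1;
    printf("Padded signal + filter: about %zu MB\n",
           (2 * n * sizeof(cufftComplex)) >> 20);

    // The C2C plan also needs a work area; cufftEstimate1d gives an upper bound.
    size_t work_size = 0;
    if (cufftEstimate1d((int)n, CUFFT_C2C, 1, &work_size) == CUFFT_SUCCESS) {
        printf("Estimated cuFFT work area: %zu MB\n", work_size >> 20);
    }
    return 0;
}

Since the Nano's GPU shares the 1972 MB that deviceQuery reports with the OS and the host-side copies of the signal, I assume the free number from cudaMemGetInfo could already be well below that total.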

I suggest compiling with the debug flags and testing the code with cuda-gdb.

I am not too familiar with cuda-gdb. Do you have any recommendations on how to get started, and specifically what to look at given my problem?

Please read

https://developer.download.nvidia.com/GTC/PDF/1062_Satoor.pdf

The Jetson Nano has a kernel timeout:

By increasing the problem size, you may have caused the kernel duration to increase to the point where the watchdog killed it.
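Your deviceQuery output already shows "Run time limit on kernels: Yes". If you want to confirm that from code, and see what a watchdog kill looks like to the runtime API, here is a small sketch (my own, separate from the sample):

// Sketch (my own, not from the sample): query whether the display watchdog /
// run-time limit is active on device 0, and show the error check that would
// reveal a watchdog kill.
#include <cstdio>
#include <cuda_runtime.h>

int main(void) {
    int timeout_enabled = 0;
    cudaDeviceGetAttribute(&timeout_enabled, cudaDevAttrKernelExecTimeout, 0);
    printf("Run time limit on kernels: %s\n", timeout_enabled ? "Yes" : "No");

    // In the sample itself, checking the result of cudaDeviceSynchronize()
    // right after the kernel launch would distinguish a watchdog kill
    // (cudaErrorLaunchTimeout) from other failures such as out-of-memory:
    cudaError_t err = cudaDeviceSynchronize();
    if (err == cudaErrorLaunchTimeout) {
        printf("Kernel exceeded the watchdog limit\n");
    } else if (err != cudaSuccess) {
        printf("CUDA error: %s\n", cudaGetErrorString(err));
    }
    return 0;
}

If the synchronize after the kernel launch reported cudaErrorLaunchTimeout, that would point at the watchdog rather than at memory.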

You may wish to ask such Jetson-nano specific questions on the Jetson nano forum.
