Hi everyone,
I often turn to cuda-gdb while developing my app. Running under it takes quite a long time compared to running the normal debug binary directly; I would guess around 100 times slower.
Recently I stopped my app prematurely and found a backtrace suggesting that cuda-gdb allocates and deallocates a lot of memory as it runs, which I suppose explains the slowdown. Here's the backtrace:
Thread 1 "openmc" received signal SIGINT, Interrupt.
0x00007ffff6635b56 in malloc_consolidate () from /usr/lib/libc.so.6
(cuda-gdb) frame
#0 0x00007ffff6635b56 in malloc_consolidate () from /usr/lib/libc.so.6
(cuda-gdb) bt
#0 0x00007ffff6635b56 in malloc_consolidate () from /usr/lib/libc.so.6
#1 0x00007ffff66364a0 in _int_free () from /usr/lib/libc.so.6
#2 0x00007ffff6639ca8 in free () from /usr/lib/libc.so.6
#3 0x00007ffff5103f89 in ?? () from /usr/lib/libcuda.so.1
#4 0x00007ffff5149fed in ?? () from /usr/lib/libcuda.so.1
#5 0x00007ffff5268526 in ?? () from /usr/lib/libcuda.so.1
#6 0x00007ffff5107219 in ?? () from /usr/lib/libcuda.so.1
#7 0x00007ffff502d11e in ?? () from /usr/lib/libcuda.so.1
#8 0x00007ffff51b8066 in ?? () from /usr/lib/libcuda.so.1
#9 0x00007ffff5059c59 in ?? () from /usr/lib/libcuda.so.1
#10 0x00007ffff52834f0 in ?? () from /usr/lib/libcuda.so.1
#11 0x00007ffff4fdf099 in ?? () from /usr/lib/libcuda.so.1
#12 0x00007ffff4fff77c in ?? () from /usr/lib/libcuda.so.1
#13 0x00007ffff50c7eba in ?? () from /usr/lib/libcuda.so.1
#14 0x00007ffff76dec8b in __cudart841 () from /usr/local/lib/libopenmc.so
#15 0x00007ffff77323b0 in cudaLaunchKernel () from /usr/local/lib/libopenmc.so
#16 0x00007ffff74e0546 in cudaLaunchKernel<char> (
func=0x7ffff74e0dc8 <openmc::gpu::process_advance_events_device<256u>(openmc::EventQueueItem*, unsigned int, openmc::EventQueueItem*, openmc::EventQueueItem*)> "UH\211\345H\203\354 H\211}\370\211u\364H\211U\350H\211M\340H\215M\340H\215U\350H\215u\364H\215E\370H\211\307\350\265\353\377\377\220\311\303UH\211\345H\203\354 H\211}\370\211u\364H\211U\350H\211M\340H\215M\340H\215U\350H\215u\364H\215E\370H\211\307\350W\355\377\377\220\311\303UH\211\345H\203\354 H\211}\370\211u\364H\211U\350H\211M\340H\215M\340H\215U\350H\215u\364H\215E\370H\211\307\350\371\356\377\377\220\311\303UH\211\345ATSH\203\354\020H\211}\350\211u\344H\213E\350\213U\344\211P\bH\213E\350\213P\bH\213E\350\213@\f9\302vN\277\020", gridDim=..., blockDim=...,
args=0x7fffffffdf00, sharedMem=0,
stream=0x0 <openmc::vector<double, openmc::UnifiedAllocator<double> >::at(unsigned int) const>)
at /opt/cuda/bin/../targets/x86_64-linux/include/cuda_runtime.h:211
#17 0x00007ffff74df992 in __device_stub__ZN6openmc3gpu29process_advance_events_deviceILj256EEEvPNS_14EventQueueItemEjS3_S3_ (__par0=0x7fffa5e00000, __par1=36285, __par2=0x7fffa0000000, __par3=0x7fffa0200000)
at /tmp/tmpxft_00001014_00000000-6_event.cudafe1.stub.c:10
#18 0x00007ffff74df9e7 in openmc::gpu::__wrapper__device_stub_process_advance_events_device<256> (
__cuda_0=@0x7fffffffdf88: 0x7fffa5e00000, __cuda_1=@0x7fffffffdf84: 36285,
__cuda_2=@0x7fffffffdf78: 0x7fffa0000000, __cuda_3=@0x7fffffffdf70: 0x7fffa0200000)
at /tmp/tmpxft_00001014_00000000-6_event.cudafe1.stub.c:13
#19 0x00007ffff74e0df7 in openmc::gpu::process_advance_events_device<256u> (queue=0x7fffa5e00000,
queue_size=36285, surface_crossing_queue=0x7fffa0000000, collision_queue=0x7fffa0200000)
I don't know what this memory is being used for, but is there some way to ask cuda-gdb to allocate it all up front rather than repeatedly allocating and freeing it, as it appears to be doing? That would be very helpful, if so. Or maybe this is unavoidable; I don't immediately see anything relevant in its documentation.
Thanks,
Gavin Ridley
Hi @gavin.keith.ridley
Thank you very much for reporting this issue. Could you also share some additional details:
- CUDA GDB version: the output of the cuda-gdb --version command.
- CUDA version: the output of the nvidia-smi command.
- If you have the CUDA samples installed, could you also share the output of the deviceQuery sample?
Also, is there a way for us to reproduce this problem locally? For example, do you have sample source code that triggers the issue and that you could share?
Hey, yep!
NVIDIA (R) CUDA Debugger
11.3 release
Portions Copyright (C) 2007-2021 NVIDIA Corporation
GNU gdb (GDB) 8.3.1
Tue May 4 09:42:24 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 465.24.02 Driver Version: 465.24.02 CUDA Version: 11.3 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA TITAN V Off | 00000000:65:00.0 On | N/A |
| 28% 32C P8 23W / 250W | 133MiB / 12064MiB | 6% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
./deviceQuery Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: "NVIDIA TITAN V"
CUDA Driver Version / Runtime Version 11.3 / 11.3
CUDA Capability Major/Minor version number: 7.0
Total amount of global memory: 12064 MBytes (12650348544 bytes)
(80) Multiprocessors, ( 64) CUDA Cores/MP: 5120 CUDA Cores
GPU Max Clock rate: 1455 MHz (1.46 GHz)
Memory Clock rate: 850 Mhz
Memory Bus Width: 3072-bit
L2 Cache Size: 4718592 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total shared memory per multiprocessor: 98304 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 7 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device supports Managed Memory: Yes
Device supports Compute Preemption: Yes
Supports Cooperative Kernel Launch: Yes
Supports MultiDevice Co-op Kernel Launch: Yes
Device PCI Domain ID / Bus ID / location ID: 0 / 101 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 11.3, CUDA Runtime Version = 11.3, NumDevs = 1
Result = PASS
As for reproducing this, it's tough for me to do. It seems to happen when I access unified memory: the code in the backtrace is accessing unified memory I've allocated, but I was unable to reproduce the slowdown in a smaller program that does the same thing. Maybe one of you who knows how cuda-gdb works internally could give me better hints on where to look for the cause.
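For reference, the kind of smaller program I tried was roughly along these lines (a simplified stand-in with placeholder names, not the actual openmc code), and it did not show the slowdown:

// Simplified stand-in for the unified-memory pattern I tried to reproduce with.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void touch(double* data, unsigned n) {
  unsigned i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) data[i] *= 2.0;
}

int main() {
  constexpr unsigned n = 1u << 20;
  double* data = nullptr;

  // Allocate unified (managed) memory and initialize it from the host,
  // so the pages start out resident on the CPU.
  cudaMallocManaged(&data, n * sizeof(double));
  for (unsigned i = 0; i < n; ++i) data[i] = 1.0;

  // Launch the kernel repeatedly and synchronize after each launch.
  for (int iter = 0; iter < 1000; ++iter) {
    touch<<<(n + 255) / 256, 256>>>(data, n);
    cudaDeviceSynchronize();
  }

  printf("data[0] = %f\n", data[0]);
  cudaFree(data);
  return 0;
}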
Hi @gavin.keith.ridley
Thank you for the additional information; we are looking into the issue.
While we are looking into it, could you check whether you have the break_on_launch config option enabled? This option can significantly increase the overhead. Please try the following in the cuda-gdb shell:
(cuda-gdb) show cuda break_on_launch
(cuda-gdb) set cuda break_on_launch none # if previous command output is not none
Unfortunately, no, I don’t have it turned on.
(cuda-gdb) show cuda break_on_launch
Break on every kernel launch is set to 'none'.
Let me know if your search ends up at a dead end. I can work on pulling a kernel out of my app and making faux data to go with it. I would prefer to send it over email if we do that.
Hi @gavin.keith.ridley
Unfortunately, we were not able to identify any obvious reason for the slowdown you are experiencing. If possible, we would need a repro case for this issue.
I can work on pulling a kernel out of my app and making faux data to go with it. I would prefer to send it over email if we do that.
Could you please send the files to devtools-support@nvidia.com?
Once again, thank you very much for spending your time helping us to improve our product!
Well, thanks for looking into this, AKravets. I'll send you an MWE as soon as possible, hopefully within a month.
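As a starting point, I expect the MWE skeleton to look roughly like the following. EventQueueItem and the kernel body here are placeholder stand-ins rather than the real openmc definitions; only the launch pattern and the queue size match what appears in the backtrace above.

// Hypothetical MWE skeleton: a templated kernel with the same signature shape
// as openmc::gpu::process_advance_events_device, fed with faux data in
// unified memory.
#include <cstdio>
#include <cuda_runtime.h>

namespace openmc {

struct EventQueueItem {   // placeholder fields, not the real struct
  unsigned particle_idx;
  double distance;
};

namespace gpu {

template <unsigned BlockSize>
__global__ void process_advance_events_device(EventQueueItem* queue,
                                              unsigned queue_size,
                                              EventQueueItem* surface_crossing_queue,
                                              EventQueueItem* collision_queue) {
  unsigned i = blockIdx.x * BlockSize + threadIdx.x;
  if (i >= queue_size) return;
  // Placeholder work: route each item to one of the two output queues.
  EventQueueItem item = queue[i];
  if (item.distance < 1.0)
    surface_crossing_queue[i] = item;
  else
    collision_queue[i] = item;
}

} // namespace gpu
} // namespace openmc

int main() {
  using openmc::EventQueueItem;
  constexpr unsigned queue_size = 36285;  // queue size seen in the backtrace

  // Faux queues in unified memory, filled from the host.
  EventQueueItem *queue, *surface_q, *collision_q;
  cudaMallocManaged(&queue, queue_size * sizeof(EventQueueItem));
  cudaMallocManaged(&surface_q, queue_size * sizeof(EventQueueItem));
  cudaMallocManaged(&collision_q, queue_size * sizeof(EventQueueItem));
  for (unsigned i = 0; i < queue_size; ++i)
    queue[i] = EventQueueItem{i, (i % 2) ? 0.5 : 2.0};

  constexpr unsigned block = 256;
  unsigned grid = (queue_size + block - 1) / block;
  openmc::gpu::process_advance_events_device<block>
      <<<grid, block>>>(queue, queue_size, surface_q, collision_q);
  cudaDeviceSynchronize();

  printf("done\n");
  cudaFree(queue);
  cudaFree(surface_q);
  cudaFree(collision_q);
  return 0;
}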