Jetson Xavier - Nsight remote debug problems with CUDA samples (several of them) - JetPack 4.1.1 with CUDA 10.0

Hi,

I have a new NVIDIA Jetson Xavier board bought from NVIDIA.
I installed JetPack 4.1.1 as per the JetPack installation guide.

Setup:
- Host: Ubuntu 18.04 PC, used to compile and run remote cuda-gdb sessions against the Jetson Xavier; it has CUDA and all other packages from JetPack 4.1.1 installed.
- Target: Jetson Xavier, with all of JetPack 4.1.1 installed (OS flash, CUDA, etc.).

I copied the CUDA samples to ~/NVIDIA_samples.
All the libraries and Nsight settings were configured as described for the Jetson TX2 in an NVIDIA document.

I have tried numerous samples such as box_filter, matrix_mul and convolution_texture, but all show the same problem and errors described below. Please advise or help me debug this Nsight problem.


Description of the error:

  1. With no breakpoint set, the code runs and gives results, but the reported throughput (in GFLOPS) leaves me unsure it is running at full speed.
  2. With one normal breakpoint set at main, it stops at that breakpoint, but if I then select Run or Resume in Nsight, it hangs with the errors in the attached files (three console logs: 1. gdb.txt, 2. remoteshell.txt, 3. gdbtraces.txt, plus the host machine's g++ and CUDA versions in g++.txt and nvcc-version.txt). I have also attached eclipsensight.odf, a screenshot of the Nsight debugger. (The text logs are pasted below; I don't know how to attach the .odf image here.)
  3. If I instead set a breakpoint on the kernel (via the CUDA window in the right corner, "enable breakpoint on all application kernel launches"), it still hangs.
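Regarding point 1, the matrixMul sample prints its performance as a GFLOP/s figure derived from the matrix dimensions and the measured kernel time. A minimal sketch of that calculation (my own Python, with a made-up kernel time, just to show what the printed number means):

```python
def matmul_gflops(hA, wA, wB, seconds):
    """GFLOP/s for a (hA x wA) * (wA x wB) matrix multiply.

    Each output element needs wA multiply-add pairs, so the total
    floating-point operation count is 2 * hA * wA * wB.
    """
    flops = 2.0 * hA * wA * wB
    return flops * 1e-9 / seconds

# Dimensions from the attached log: MatrixA(320,320), MatrixB(640,320).
# The 0.01 s kernel time is invented purely for illustration.
print(round(matmul_gflops(320, 320, 640, 0.01), 2))  # 13.11
```

Comparing the figure the sample prints against the device's peak throughput is one way to judge whether it is running at full speed.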

Please take a look and help me debug this issue.
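For reference, judging from the attached logs, the remote session Nsight sets up boils down to something like the following (a sketch reconstructed from the logs below; the paths, port and IP are from my setup), in case it helps to reproduce the hang outside Eclipse:

    # on the Jetson (target): start cuda-gdbserver on the sample binary
    /usr/local/cuda-10.0/bin/cuda-gdbserver --cuda-use-lockfile=0 :2345 /tmp/nsight-debug/matrix_mul

    # on the host: connect cuda-gdb to the target
    /usr/local/cuda-10.0/bin/cuda-gdb /home/cose/cuda-workspace/matrix_mul/Debug/matrix_mul
    (cuda-gdb) set sysroot target://
    (cuda-gdb) target remote 192.168.10.153:2345
    (cuda-gdb) break main
    (cuda-gdb) continue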

Regards
francis j

----- gdb.txt -----
Coalescing of the CUDA commands output is off.
warning: “remote:” is deprecated, use “target:” instead.
warning: sysroot set to “target://”.
Reading /lib/ld-linux-aarch64.so.1 from remote target…
warning: File transfers from remote targets can be slow. Use “set sysroot” to access files locally instead.
Reading /lib/ld-linux-aarch64.so.1 from remote target…
Reading /lib/ld-2.27.so from remote target…
Reading /lib/.debug/ld-2.27.so from remote target…
0x0000007fb7fd31c0 in ?? () from target:/lib/ld-linux-aarch64.so.1
$1 = 0xff
The target endianness is set automatically (currently little endian)
Reading /lib/aarch64-linux-gnu/librt.so.1 from remote target…
Reading /lib/aarch64-linux-gnu/libpthread.so.0 from remote target…
Reading /lib/aarch64-linux-gnu/libdl.so.2 from remote target…
Reading /usr/lib/aarch64-linux-gnu/libstdc++.so.6 from remote target…
Reading /lib/aarch64-linux-gnu/libgcc_s.so.1 from remote target…
Reading /lib/aarch64-linux-gnu/libc.so.6 from remote target…
Reading /lib/aarch64-linux-gnu/libm.so.6 from remote target…
Reading /lib/aarch64-linux-gnu/librt-2.27.so from remote target…
Reading /lib/aarch64-linux-gnu/.debug/librt-2.27.so from remote target…
Reading /lib/aarch64-linux-gnu/47f37309461cc15fb1915bc198d718017a1f87.debug from remote target…
Reading /lib/aarch64-linux-gnu/.debug/47f37309461cc15fb1915bc198d718017a1f87.debug from remote target…
Reading /lib/aarch64-linux-gnu/libdl-2.27.so from remote target…
Reading /lib/aarch64-linux-gnu/.debug/libdl-2.27.so from remote target…
Reading /usr/lib/aarch64-linux-gnu/ec888879161599d09ef86dc9b55f5096935334.debug from remote target…
Reading /usr/lib/aarch64-linux-gnu/.debug/ec888879161599d09ef86dc9b55f5096935334.debug from remote target…
Reading /lib/aarch64-linux-gnu/866070a0bdee074a8459fae7b95155fdd12861.debug from remote target…
Reading /lib/aarch64-linux-gnu/.debug/866070a0bdee074a8459fae7b95155fdd12861.debug from remote target…
Reading /lib/aarch64-linux-gnu/libc-2.27.so from remote target…
Reading /lib/aarch64-linux-gnu/.debug/libc-2.27.so from remote target…
Reading /lib/aarch64-linux-gnu/libm-2.27.so from remote target…
Reading /lib/aarch64-linux-gnu/.debug/libm-2.27.so from remote target…

Temporary breakpoint 1, main (argc=1, argv=0x7ffffff4e8) at …/src/matrixMul.cu:277
277 int main(int argc, char **argv) {
Reading /usr/lib/aarch64-linux-gnu/tegra/libcuda.so.1 from remote target…
Reading /usr/lib/aarch64-linux-gnu/tegra/libnvrm_gpu.so from remote target…
Reading /usr/lib/aarch64-linux-gnu/tegra/libnvrm.so from remote target…
Reading /usr/lib/aarch64-linux-gnu/tegra/libnvrm_graphics.so from remote target…
Reading /usr/lib/aarch64-linux-gnu/tegra/libnvidia-fatbinaryloader.so.31.1.0 from remote target…
Reading /usr/lib/aarch64-linux-gnu/tegra/libnvos.so from remote target…
Reading /usr/lib/aarch64-linux-gnu/tegra/libcuda.so.1.1.debug from remote target…
Reading /usr/lib/aarch64-linux-gnu/tegra/.debug/libcuda.so.1.1.debug from remote target…
Reading /usr/lib/aarch64-linux-gnu/tegra/libnvrm_gpu.so.debug from remote target…
Reading /usr/lib/aarch64-linux-gnu/tegra/.debug/libnvrm_gpu.so.debug from remote target…
Reading /usr/lib/aarch64-linux-gnu/tegra/libnvrm.so.debug from remote target…
Reading /usr/lib/aarch64-linux-gnu/tegra/.debug/libnvrm.so.debug from remote target…
Reading /usr/lib/aarch64-linux-gnu/tegra/libnvrm_graphics.so.debug from remote target…
Reading /usr/lib/aarch64-linux-gnu/tegra/.debug/libnvrm_graphics.so.debug from remote target…
Reading /usr/lib/aarch64-linux-gnu/tegra/libnvidia-fatbinaryloader.so.debug from remote target…
Reading /usr/lib/aarch64-linux-gnu/tegra/.debug/libnvidia-fatbinaryloader.so.debug from remote target…
Reading /usr/lib/aarch64-linux-gnu/tegra/libnvos.so.debug from remote target…
Reading /usr/lib/aarch64-linux-gnu/tegra/.debug/libnvos.so.debug from remote target…

----- remoteshell.txt -----

Last login: Mon Mar 18 16:10:40 2019
echo $PWD’>’
/bin/sh -c “cd "/tmp/nsight-debug";export LD_LIBRARY_PATH="/usr/local/cuda-10.0/lib64":${LD_LIBRARY_PATH};export NVPROF_TMPDIR="/tmp";"/usr/local/cuda-10.0/bin/cuda-gdbserver" --cuda-use-lockfile=0 :2345 "/tmp/nsight-debug/matrix_mul"”;exit
nvidia@jetson-0423818077348:~$ echo $PWD’>’
/home/nvidia>
nvidia@jetson-0423818077348:~$ /bin/sh -c “cd "/tmp/nsight-debug";export LD_LIBRARY_PATH="/usr/local/cuda-10.0/lib64":${LD_LIBRARY_PATH};export NVPROF_TMPDIR="/tmp";"/usr/local/cuda-10.0/bin/cuda-gdbserver" --cuda-use-lockfile=0 :2345 "/tmp/nsight-debug/matrix_mul"”;exit
Process /tmp/nsight-debug/matrix_mul created; pid = 7546
Listening on port 2345
Remote debugging from host 192.168.10.220
Warning: Adjusting return value of linux_common_core_of_thread (pid=32, tid=32).
core = 4 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=33, tid=33).
core = 4 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=34, tid=34).
core = 4 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=35, tid=35).
core = 4 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=36, tid=36).
core = 4 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=37, tid=37).
core = 4 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=38, tid=38).
core = 5 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=39, tid=39).
core = 5 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=40, tid=40).
core = 5 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=41, tid=41).
core = 5 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=42, tid=42).
core = 5 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=44, tid=44).
core = 6 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=45, tid=45).
core = 6 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=46, tid=46).
core = 6 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=47, tid=47).
core = 6 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=48, tid=48).
core = 6 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=50, tid=50).
core = 7 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=51, tid=51).
core = 7 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=52, tid=52).
core = 7 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=53, tid=53).
core = 7 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=54, tid=54).
core = 7 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=685, tid=685).
core = 5 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=687, tid=687).
core = 6 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=692, tid=692).
core = 7 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=693, tid=693).
core = 5 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=694, tid=694).
core = 7 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=714, tid=714).
core = 7 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=799, tid=799).
core = 6 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=818, tid=818).
core = 7 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=961, tid=961).
core = 7 >= num_cores = 4!

Warning: Adjusting return value of linux_common_core_of_thread (pid=962, tid=962).
core = 4 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=965, tid=965).
core = 6 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=1095, tid=1095).
core = 4 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=1134, tid=1134).
core = 5 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=1205, tid=1205).
core = 4 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=1226, tid=1226).
core = 7 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=1229, tid=1229).
core = 7 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=1350, tid=1350).
core = 7 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=1351, tid=1351).
core = 6 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=1353, tid=1353).
core = 5 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=1356, tid=1356).
core = 7 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=1357, tid=1357).
core = 6 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=1359, tid=1359).
core = 5 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=1362, tid=1362).
core = 7 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=1398, tid=1398).
core = 6 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=1401, tid=1401).
core = 5 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=1410, tid=1410).
core = 7 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=1416, tid=1416).
core = 5 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=1419, tid=1419).
core = 6 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=1431, tid=1431).
core = 5 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=1456, tid=1456).
core = 5 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=1466, tid=1466).
core = 4 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=1467, tid=1467).
core = 5 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=1527, tid=1527).
core = 7 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=1550, tid=1550).
core = 7 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=1591, tid=1591).
core = 5 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=1610, tid=1610).
core = 5 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=1670, tid=1670).
core = 7 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=1672, tid=1672).
core = 5 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=1695, tid=1695).
core = 4 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=1704, tid=1704).
core = 6 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=1845, tid=1845).
core = 6 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=1847, tid=1847).
core = 6 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=1849, tid=1849).
core = 7 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=1851, tid=1851).
core = 5 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=1900, tid=1900).
core = 6 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=1903, tid=1903).
core = 5 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=1907, tid=1907).
core = 5 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=1968, tid=1968).
core = 5 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=2019, tid=2019).
core = 4 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=2058, tid=2058).
core = 4 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=2087, tid=2087).
core = 6 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=2088, tid=2088).
core = 6 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=2226, tid=2226).
core = 6 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=2227, tid=2227).
core = 7 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=2238, tid=2238).
core = 7 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=2248, tid=2248).
core = 7 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=2252, tid=2252).
core = 6 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=2253, tid=2253).
core = 7 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=2254, tid=2254).
core = 6 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=2255, tid=2255).
core = 7 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=2256, tid=2256).
core = 6 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=2257, tid=2257).
core = 7 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=2258, tid=2258).
core = 6 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=2259, tid=2259).
core = 7 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=2262, tid=2262).
core = 7 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=2266, tid=2266).
core = 7 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=2267, tid=2267).
core = 6 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=2268, tid=2268).
core = 7 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=2269, tid=2269).
core = 6 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=2270, tid=2270).
core = 7 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=2271, tid=2271).
core = 6 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=2272, tid=2272).
core = 7 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=2273, tid=2273).
core = 7 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=2275, tid=2275).
core = 7 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=2277, tid=2277).
core = 5 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=2422, tid=2422).
core = 7 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=4479, tid=4746).
core = 4 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=4479, tid=5276).
core = 5 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=4776, tid=4776).
core = 5 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=4820, tid=4820).
core = 4 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=4861, tid=4861).
core = 4 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=4899, tid=4930).
core = 5 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=4942, tid=4942).
core = 5 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=4958, tid=4958).
core = 5 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=4975, tid=4975).
core = 5 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=5029, tid=5029).
core = 5 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=5248, tid=5248).
core = 6 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=5322, tid=5322).
core = 7 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=5412, tid=5542).
core = 7 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=5556, tid=5556).
core = 5 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=5626, tid=5626).
core = 6 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=5627, tid=5627).
core = 4 >= num_cores = 4!
[Matrix Multiply Using CUDA] - Starting…
GPU Device 0: “Xavier” with compute capability 7.2

MatrixA(320,320), MatrixB(640,320)

----- gdbtraces.txt -----
491,605 1-gdb-set cuda break_on_launch none
491,606 2-gdb-set cuda kernel_events none
491,607 1^done
491,610 (gdb)
491,610 2^done
491,610 (gdb)
491,643 3-gdb-set cuda coalescing off
491,644 4-gdb-set cuda single_stepping_optimizations on
491,644 5-gdb-set cuda memcheck off
491,644 6-gdb-set cuda software_preemption on
491,644 ~“Coalescing of the CUDA commands output is off.\n”
491,644 7-gdb-set cuda value_extrapolation off
491,646 3^done
491,646 (gdb)
491,646 4^done
491,646 (gdb)
491,646 5^done
491,646 (gdb)
491,646 6^done
491,646 (gdb)
491,646 7^done
491,646 (gdb)
491,666 8-gdb-set cuda api_failures ignore
491,666 8^done
491,666 (gdb)
491,668 9-gdb-set cuda kernel_events none
491,669 9^done
491,669 (gdb)
491,671 10-environment-cd /home/cose/cuda-workspace/matrix_mul
491,671 10^done
491,671 (gdb)
491,676 11-gdb-set breakpoint pending on
491,676 11^done
491,676 (gdb)
491,677 12-gdb-set breakpoint conditional-pending on
491,677 12^done
491,677 (gdb)
491,682 13-gdb-set detach-on-fork on
491,682 13^done
491,682 (gdb)
491,684 14-enable-pretty-printing
491,684 14^done
491,684 (gdb)
491,687 15-gdb-set python print-stack none
491,688 15^done
491,688 (gdb)
491,690 16-gdb-set print object on
491,690 16^done
491,690 (gdb)
491,692 17-gdb-set print sevenbit-strings on
491,693 17^done
491,693 (gdb)
491,694 18-gdb-set host-charset UTF-8
491,695 18^done
491,695 (gdb)
491,697 19-gdb-set target-charset UTF-8
491,697 19^done
491,697 (gdb)
491,702 20-gdb-set target-wide-charset UTF-32
491,702 20^done
491,702 (gdb)
491,706 21source .cuda-gdbinit
491,707 &“source .cuda-gdbinit\n”
491,707 &“.cuda-gdbinit: Aucun fichier ou dossier de ce type.\n”
491,707 21^error,msg=“.cuda-gdbinit: Aucun fichier ou dossier de ce type.” (French: “No such file or directory”)
491,708 (gdb)
491,710 22-gdb-set target-async off
491,710 22^done
491,710 (gdb)
491,713 23-gdb-set auto-solib-add on
491,713 23^done
491,713 (gdb)
491,718 24-gdb-set sysroot remote://
491,718 &“warning: "remote:" is deprecated, use "target:" instead.\n”
491,718 &“warning: sysroot set to "target://".\n”
491,719 24^done
491,719 (gdb)
491,730 25-file-exec-and-symbols /home/cose/cuda-workspace/matrix_mul/Debug/matrix_mul
491,749 25^done
491,749 (gdb)

491,751 26-target-select remote 192.168.10.153:2345
491,760 27-cuda-info-kernels
491,765 =tsv-created,name=“trace_timestamp”,initial=“0”\n
491,773 =thread-group-started,id=“i1”,pid=“7546”
491,774 =thread-created,id=“1”,group-id=“i1”
491,776 28-list-thread-groups --available
491,785 ~“Reading /lib/ld-linux-aarch64.so.1 from remote target…\n”
491,785 &“warning: File transfers from remote targets can be slow. Use "set sysroot" to access fil
es locally instead.\n”
491,822 ~“Reading /lib/ld-linux-aarch64.so.1 from remote target…\n”
491,847 =library-loaded,id=“/lib/ld-linux-aarch64.so.1”,target-name=“/lib/ld-linux-aarch64.so.1”,hos
t-name=“target:/lib/ld-linux-aarch64.so.1”,symbols-loaded=“0”,thread-group=“i1”
491,849 ~“Reading /lib/ld-2.27.so from remote target…\n”
491,850 ~“Reading /lib/.debug/ld-2.27.so from remote target…\n”
491,900 ~“0x0000007fb7fd31c0 in ?? () from target:/lib/ld-linux-aarch64.so.1\n”
491,900 *stopped,frame={addr=“0x0000007fb7fd31c0”,func=“??”,args=,from=“target:/lib/ld-linux-aarch
64.so.1”},thread-id=“1”,stopped-threads=“all”,core=“1”
491,902 29-list-thread-groups
491,915 26^connected
491,915 (gdb)
491,915 27^done,InfoCudaKernelsTable={nr_rows=“0”,nr_cols=“10”,hdr=[{width=“1”,alignment=“1”,col_nam
e=“current”,colhdr=" “},{width=“6”,alignment=“1”,col_name=“kernel”,colhdr=“Kernel”},{width=“6”,align
ment=“1”,col_name=“parent”,colhdr=“Parent”},{width=“3”,alignment=“1”,col_name=“device”,colhdr=“Dev”}
,{width=“4”,alignment=“1”,col_name=“grid”,colhdr=“Grid”},{width=“6”,alignment=“1”,col_name=“status”,
colhdr=“Status”},{width=“8”,alignment=“1”,col_name=“sms_mask”,colhdr=“SMs Mask”},{width=“7”,alignmen
t=“1”,col_name=“gridDim”,colhdr=“GridDim”},{width=“8”,alignment=“1”,col_name=“blockDim”,colhdr=“Bloc
kDim”},{width=“10”,alignment=”-1",col_name=“invocation”,colhdr=“Invocation”}],body=}
491,916 (gdb)
491,922 30-gdb-show language
492,114 28^done,groups=[{id=“1”,type=“process”,description=“/sbin/init”,user=“root”,cores=[“2”]},{id
=“2”,type=“process”,description=“[kthreadd]”,user=“root”,cores=[“3”]},{id=“3”,type=“process”,descrip
tion=“[ksoftirqd/0]”,user=“root”,cores=[“0”]},{id=“4”,type=“process”,description=“[kworker/0:0]”,use
r=“root”,cores=[“0”]},{id=“5”,type=“process”,description=“[kworker/0:0H]”,user=“root”,cores=[“0”]},{
id=“6”,type=“process”,description=“[kworker/u16:0]”,user=“root”,cores=[“2”]},{id=“7”,type=“process”,
description=“[rcu_preempt]”,user=“root”,cores=[“1”]},{id=“8”,type=“process”,description=“[rcu_sched]
“,user=“root”,cores=[“2”]},{id=“9”,type=“process”,description=”[rcu_bh]”,user=“root”,cores=[“0”]},{i
d=“10”,type=“process”,description=“[migration/0]”,user=“root”,cores=[“0”]},{id=“11”,type=“process”,d
escription=“[lru-add-drain]”,user=“root”,cores=[“0”]},{id=“12”,type=“process”,description=“[watchdog
/0]”,user=“root”,cores=[“0”]},{id=“13”,type=“process”,description=“[cpuhp/0]”,user=“root”,cores=[“0”
]},{id=“14”,type=“process”,description=“[cpuhp/1]”,user=“root”,cores=[“1”]},{id=“15”,type=“process”,
description=“[watchdog/1]”,user=“root”,cores=[“1”]},{id=“16”,type=“process”,description=“[migration/
1]”,user=“root”,cores=[“1”]},{id=“17”,type=“process”,description=“[ksoftirqd/1]”,user=“root”,cores=[
“1”]},{id=“18”,type=“process”,description=“[kworker/1:0]”,user=“root”,cores=[“1”]},{id=“19”,type=“pr
ocess”,description=“[kworker/1:0H]”,user=“root”,cores=[“1”]},{id=“20”,type=“process”,description=“[c
puhp/2]”,user=“root”,cores=[“2”]},{id=“21”,type=“process”,description=“[watchdog/2]”,user=“root”,cor
es=[“2”]},{id=“22”,type=“process”,description=“[migration/2]”,user=“root”,cores=[“2”]},{id=“23”,type
=“process”,description=“[ksoftirqd/2]”,user=“root”,cores=[“2”]},{id=“24”,type=“process”,description=
“[kworker/2:0]”,user=“root”,cores=[“2”]},{id=“25”,type=“process”,description=“[kworker/2:0H]”,user="
root",cores=[“2”]},{id=“26”,type=“process”,description=“[cpuhp/3]”,user=“root”,cores=[“3”]},{id=“27”
,type=“process”,description=“[watchdog/3]”,user=“root”,cores=[“3”]},{id=“28”,type=“process”,descript
ion=“[migration/3]”,user=“root”,cores=[“3”]},{id=“29”,type=“process”,description=“[ksoftirqd/3]”,use
r=“root”,cores=[“3”]},{id=“30”,type=“process”,description=“[kworker/3:0]”,user=“root”,cores=[“3”]},{
id=“31”,type=“process”,description=“[kworker/3:0H]”,user=“root”,cores=[“3”]},{id=“32”,type=“process”
,description=“[cpuhp/4]”,user=“root”,cores=[“3”]},{id=“33”,type=“process”,description=“[watchdog/4]”
,user=“root”,cores=[“3”]},{id=“34”,type=“process”,description=“[migration/4]”,user=“root”,cores=[“3”
]},{id=“35”,type=“process”,description=“[ksoftirqd/4]”,user=“root”,cores=[“3”]},{id=“36”,type=“proce
ss”,description=“[kworker/4:0]”,user=“root”,cores=[“3”]},{id=“37”,type=“process”,description=“[kwork
er/4:0H]”,user=“root”,cores=[“3”]},{id=“38”,type=“process”,description=“[cpuhp/5]”,user=“root”,cores
=[“3”]},{id=“39”,type=“process”,description=“[watchdog/5]”,user=“root”,cores=[“3”]},{id=“40”,type=“p
rocess”,description=“[migration/5]”,user=“root”,cores=[“3”]},{id=“41”,type=“process”,description=“[k
softirqd/5]”,user=“root”,cores=[“3”]},{id=“42”,type=“process”,description=“[kworker/5:0]”,user=“root
“,cores=[“3”]},{id=“43”,type=“process”,description=”[kworker/5:0H]”,user=“root”,cores=[“0”]},{id=“44
“,type=“process”,description=”[cpuhp/6]”,user=“root”,cores=[“3”]},{id=“45”,type=“process”,descriptio
n=“[watchdog/6]”,user=“root”,cores=[“3”]},{id=“46”,type=“process”,description=“[migration/6]”,user="
root",cores=[“3”]},{id=“47”,type=“process”,description=“[ksoftirqd/6]”,user=“root”,cores=[“3”]},{id=
“48”,type=“process”,description=“[kworker/6:0]”,user=“root”,cores=[“3”]},{id=“49”,type=“process”,des
cription=“[kworker/6:0H]”,user=“root”,cores=[“0”]},{id=“50”,type=“process”,description=“[cpuhp/7]”,u
ser=“root”,cores=[“3”]},{id=“51”,type=“process”,description=“[watchdog/7]”,user=“root”,cores=[“3”]},
{id=“52”,type=“process”,description=“[migration/7]”,user=“root”,cores=[“3”]},{id=“53”,type=“process”
,description=“[ksoftirqd/7]”,user=“root”,cores=[“3”]},{id=“54”,type=“process”,description=“[kworker/
7:0]”,user=“root”,cores=[“3”]},{id=“55”,type=“process”,description=“[kworker/7:0H]”,user=“root”,core
s=[“0”]},{id=“56”,type=“process”,description=“[kdevtmpfs]”,user=“root”,cores=[“0”]},{id=“57”,type=“p
rocess”,description=“[netns]”,user=“root”,cores=[“2”]},{id=“58”,type=“process”,description=“[kworker
/u16:1]”,user=“root”,cores=[“3”]},{id=“62”,type=“process”,description=“[kworker/u16:2]”,user=“root”,
cores=[“3”]},{id=“84”,type=“process”,description=“[kworker/u16:3]”,user=“root”,cores=[“3”]},{id=“231
“,type=“process”,description=”[irq/100-mc_stat]”,user=“root”,cores=[“0”]},{id=“343”,type=“process”,d
escription=“[kworker/u16:4]”,user=“root”,cores=[“1”]},{id=“685”,type=“process”,description=“[khungta
skd]”,user=“root”,cores=[“3”]},{id=“686”,type=“process”,description=“[oom_reaper]”,user=“root”,cores
=[“2”]},{id=“687”,type=“process”,description=“[writeback]”,user=“root”,cores=[“3”]},{id=“689”,type="
process",description=“[kcompactd0]”,user=“root”,cores=[“1”]},{id=“690”,type=“process”,description=“[
ksmd]”,user=“root”,cores=[“3”]},{id=“691”,type=“process”,description=“[khugepaged]”,user=“root”,core
s=[“1”]},{id=“692”,type=“process”,description=“[kworker/7:1]”,user=“root”,cores=[“3”]},{id=“693”,typ
e=“process”,description=“[crypto]”,user=“root”,cores=[“3”]},{id=“694”,type=“process”,description=“[k
integrityd]”,user=“root”,cores=[“3”]},{id=“695”,type=“process”,description=“[bioset]”,user=“root”,co
res=[“3”]},{id=“696”,type=“process”,description=“[kblockd]”,user=“root”,cores=[“0”]},{id=“714”,type=
“process”,description=“[ata_sff]”,user=“root”,cores=[“3”]},{id=“731”,type=“process”,description=“[kw
orker/6:1]”,user=“root”,cores=[“2”]},{id=“747”,type=“process”,description=“[irq/489-max7762]”,user="
root",cores=[“1”]},{id=“788”,type=“process”,description=“[irq/467-30c0000]”,user=“root”,cores=[“0”]}
,{id=“795”,type=“process”,description=“[kworker/1:1]”,user=“root”,cores=[“1”]},{id=“799”,type=“proce
ss”,description=“[devfreq_wq]”,user=“root”,cores=[“3”]},{id=“818”,type=“process”,description=“[watch
dogd]”,user=“root”,cores=[“3”]},{id=“836”,type=“process”,description=“[nvmap-bz]”,user=“root”,cores=
[“0”]},{id=“960”,type=“process”,description=“[kworker/3:1]”,user=“root”,cores=[“3”]},{id=“961”,type=
“process”,description=“[rpciod]”,user=“root”,cores=[“3”]},{id=“962”,type=“process”,description=“[xpr
tiod]”,user=“root”,cores=[“3”]},{id=“965”,type=“process”,description=“[host_low_prio_w]”,user=“root”
,cores=[“3”]},{id=“966”,type=“process”,description=“[irq/71-host_syn]”,user=“root”,cores=[“1”]},{id=
“967”,type=“process”,description=“[irq/72-host_sta]”,user=“root”,cores=[“0”]},{id=“1010”,type=“proce
ss”,description=“[kswapd0]”,user=“root”,cores=[“2”]},{id=“1012”,type=“process”,description=“[vmstat]
[... gdb.txt continues with GDB/MI "-list-thread-groups --available" output: a long listing of every process on the Jetson Xavier (kernel threads, systemd services, the desktop session, etc.), each with its id, description, user, and CPU-core affinity. Truncated here; the listing breaks off mid-record at "indicator-power-service".]

Hi,

This is a known issue and is fixed in CUDA 10.1. You can find more discussion here:
[url]https://devtalk.nvidia.com/default/topic/1037598/jetson-tx2/cudamalloc-killed-on-tx2-and-the-memory-can-not-be-cudafree-real[/url]

Unfortunately, CUDA 10.1 is not available for Jetson users yet.
Please watch for our announcements of future releases.

Thanks.

Hi,

Following up on your reply that the fix exists in CUDA 10.1 (which is not yet available for Jetson): can I run CUDA 10.1 on the host machine and CUDA 10.0 on the Jetson? I guess yes.

please confirm.

Regards
francis j

Hi,

Following up on your reply, I installed the newly released JetPack 4.2 (released three days ago) on both the Jetson Xavier and the Ubuntu 18.04 host and tried debugging the CUDA samples again. JetPack 4.2 does not contain CUDA 10.1, where you say the bugs above are fixed, but only CUDA 10.0 (exactly 10.0.166).

The errors I reported above are also present in this JetPack 4.2 release for the Jetson Xavier.

When can I expect a JetPack with CUDA 10.1 and the remote Nsight debug problems fixed, or have I missed something in the installation?

please confirm.

Work-around: work directly on the Jetson Xavier, compile there (which might take a long time), and debug with Nsight on the board itself; that should work?

Regards
francis j

Hi,

Sorry, but we cannot disclose our schedule here.
We will check with the developers to see if any workaround can be applied currently.

Thanks.

Hi,

With JetPack 4.2 I tried:

a. cuda-gdb (without Nsight) on the remote host + cuda-gdbserver on the Jetson Xavier: same errors as reported in my first post.
b. cuda-gdb (without Nsight) directly on the Jetson Xavier: same errors as reported in my first post.

Beyond this I think I have exhausted all the debug possibilities on the CUDA device.
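For reference, the two modes above can be sketched as shell commands. This is only a sketch: the port number (2345), the board address (<xavier-ip>), and the sample binary name are placeholders, not values from this thread.

```shell
# Mode (a): remote debugging.
# On the Jetson Xavier (target) -- root is needed with CUDA 10.0 tools:
sudo cuda-gdbserver :2345 ./convolutionTexture

# On the Ubuntu 18.04 host, using the cross cuda-gdb from JetPack:
cuda-gdb ./convolutionTexture \
    -ex "target remote <xavier-ip>:2345"

# Mode (b): native debugging, directly on the Xavier:
sudo cuda-gdb ./convolutionTexture
```

Both tools follow the ordinary gdb/gdbserver invocation conventions, so standard gdb options such as `-ex` apply.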

As you replied above that you would check with your internal team for any workaround, please do check and suggest workarounds.

Regards
francis j

Hi,

Sorry for the late update.

There is a known issue with an internal permission restriction:
all the CUDA tools require root authority from v10.0 on.

Could you try executing cuda-gdb as root to see if it helps?
Thanks, and please also let us know the result.

Hi,

I have tried Nsight debugging of the CUDA examples with sudo rights, but only on JetPack 4.1.1, and got the same errors.

I have not tried this on JetPack 4.2 yet; I shall repeat all three configurations with sudo rights on the latest JetPack 4.2 and post later.

Regards
francis j

Hi,

As per your request I have tried the latest JetPack 4.2 with the CUDA examples in

  1. mode 1, with Nsight Eclipse: same errors as reported in my first post.
  2. mode 2, with cuda-gdb remote on the host PC + cuda-gdbserver on the target Jetson Xavier: CUDA single-stepping is fine. I think with this native debugging we can use this NVIDIA Jetson.

By the by, I used sudo permissions on both host and target CUDA launch commands.

Regards & thanks
francis j

Hi,

Thanks for your feedback.

1. Good to know that mode 2 currently works fine.

2. The issue in mode 1 is fixed in CUDA 10.1, as mentioned in comment #2.
Please wait for our next JetPack release for the fix.

Thanks.

Hi,

Here is a relevant topic for your reference:
[url]https://devtalk.nvidia.com/default/topic/1049619/jetson-tx2/debugging-cross-compiled-cuda-sample-does-not-work-/[/url]

Thanks.

Hi,

Regarding your new post with the link

https://devtalk.nvidia.com/default/topic/1049619/jetson-tx2/debugging-cross-compiled-cuda-sample-does-not-work-/

As described in the link, I tried running Nsight with sudo permissions, with root login enabled for sshd (PermitRootLogin in /etc/ssh/sshd_config), on JetPack 4.2 on the Jetson Xavier, but got the same errors as in my first post.

Regards
francis j

Hi,

Following up on my previous post: I additionally tried a sudo cuda-gdbserver launch on the Jetson Xavier + sudo Nsight on the Ubuntu 18.04 host, and I am now able to compile and debug CPU + GPU code, including all single-stepping (which previously errored and crashed on the GPU, as posted in my first post), using Nsight for one of your CUDA examples:

  1. sudo cuda-gdbserver launched natively on the Jetson Xavier on the correct port, with the example application compiled on the host side and copied to the remote Jetson Xavier using scp. (The source code remains on the host PC, where I compiled it using the JetPack 4.2 CUDA 10.0 toolchain.)

  2. The sshd configuration changed as per the link you provided one post above (https://devtalk.nvidia.com/default/topic/1049619/jetson-tx2/debugging-cross-compiled-cuda-sample-does-not-work-/), i.e. PermitRootLogin set to yes in /etc/ssh/sshd_config on both the Ubuntu 18.04 host and the Jetson Xavier.

  3. Nsight launched with sudo permission, with the code (an example from your CUDA samples) compiled on the Ubuntu 18.04 host.

Nsight debugging works fine in this configuration for JetPack 4.2 on the Jetson Xavier (CUDA 10.0) for a CUDA example.
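A minimal sketch of the workaround above, assuming the convolutionTexture sample; the board address, user, port, and paths are placeholders, not values from this thread:

```shell
# On BOTH host and Xavier: allow root SSH login (the sshd option from the
# linked topic) by setting, in /etc/ssh/sshd_config:
#     PermitRootLogin yes
# then restart the SSH daemon:
sudo systemctl restart sshd

# On the host: cross-compile the sample for aarch64 and copy it over
# (TARGET_ARCH is the CUDA samples' cross-build makefile variable):
cd ~/NVIDIA_CUDA-10.0_Samples/3_Imaging/convolutionTexture
make TARGET_ARCH=aarch64
scp convolutionTexture nvidia@<xavier-ip>:/home/nvidia/

# On the Xavier: start the debug server as root on a chosen port:
sudo cuda-gdbserver :2345 /home/nvidia/convolutionTexture

# On the host: launch Nsight EE itself with sudo, and attach to
# <xavier-ip>:2345 from its remote debug configuration:
sudo nsight
```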

Thanks
francis j

Hi, francis j

Thanks for your feedback.

hi,

Continuing on the same Eclipse Nsight debugger bug: the same bug exists in your recent SDK Manager versions v0.9.12 and v0.9.13.

The workaround I suggested in my post above (dated 04/11/2019 03:30 PM) works well for debugging all the CUDA GPU and CPU code I have worked on so far, including many of your examples.

I hope you can fix this bug in your future releases.

From SDK Manager versions v0.9.11, v0.9.12, and v0.9.13 there is a similar bug in the Eclipse visual profiler when profiling GPU code: effectively it logs out when I launch the visual profiler from the Eclipse debugger.

However, if I do the profiling off-line:
1. gather all the metrics using nvprof;
2. visualize the captured metrics and the timeline using the Visual Profiler off-line (not within the Eclipse debugger);
I am able to see all the results.
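The off-line workflow above can be sketched as follows; the sample name, board address, and output filename are placeholders, not values from this thread:

```shell
# On the Jetson Xavier (root needed for GPU counters on CUDA 10.0):
# 1. collect the timeline plus the analysis metrics into a profile file:
sudo nvprof --analysis-metrics \
    --export-profile convolutionTexture.nvvp \
    ./convolutionTexture

# 2. copy the profile to the host and open it in the stand-alone
#    Visual Profiler (outside Eclipse):
scp nvidia@<xavier-ip>:convolutionTexture.nvvp .
nvvp convolutionTexture.nvvp
```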

Q: Can you give me pointers to fix this bug in the Eclipse on-line visual profiler?

Regards
francis j

Hi,

We double-checked your log today, and it looks strange:

core = 4 >= num_cores = 4!

Could you share with us which CUDA sample you are using with Nsight EE?
We want to reproduce this issue in our environment.

Thanks.

Hi,

Detail of the profiler issue in the Eclipse debugger (a repeat post; this is not the GPU-code debugging issue reported in previous posts in this thread, for which I have a workaround that has worked well on all the CUDA + CPU code debugged so far). This bug exists from JetPack 4.1.1 up to the latest SDK Manager v0.9.13.

Error:

On-line profiler: when the profiler is launched from the Eclipse debugger, I get a direct logout message, and that is all.

Off-line Visual Profiler: I am able to get all the metrics using nvprof, plus the timeline. The off-line Visual Profiler displays the results correctly.

Setup:
Ubuntu 18.04 Host Machine - to compile and do remote cuda-dbg on the board jetson xavier

  • Host PC Ubuntu 18.04 has CUDA and all other packages from JetPack 4.1.1 installed.
  • Jetson Xavier: all of JetPack 4.1.1 (OS flash, CUDA, etc.) installed on the board.

JetPack update to the latest SDK Manager v0.9.13:

Right from JetPack SDK Manager versions 0.9.11, 0.9.12, and 0.9.13, the same Eclipse visual profiler error exists.

I have copied the CUDA sample examples to ~/NVIDIA_samples.
All the libraries and Nsight settings are configured as described for the Jetson TX2 in an NVIDIA doc.

Example :
I have tried many examples; all have the same Eclipse profiler issue. Take, for example, convolutionTexture from NVIDIA_CUDA-10.0_Samples/3_Imaging/convolutionTexture.

When I launch the profiler in the Eclipse debugger, I get a direct logout message in the console, and that is all.

I hope you can help me debug this profiler error.
Regards
francis j

Hi,

So sorry to keep you waiting.
We got more information from our internal team recently.

This issue is not from CUDA-GDB but from the CPU GDB.
Since we are going to upgrade the GDB baseline from 7.12 to 8.2, this issue will be checked directly on 8.2.
The 8.2 upgrade is planned to be included in CUDA 11.0.

Thanks.

Hi,

Copy that. Good to know that you have identified the real-time profiling error when profiling CUDA kernel code from the Eclipse profiler.

I also hope you have included a fix for the CUDA debugger error that I reported earlier (my workaround was stepping through the CUDA kernel code by launching the CUDA GDB server with sudo permission on the Jetson Xavier and grabbing the process in the Eclipse debugger on the host using remote debug mode).

Good; I shall wait for the GDB 8.2 baseline upgrade in CUDA 11.0.

Regards
francis j

Hi,

Thanks for your response.

The permission limitation exists from CUDA 10.0 on and is caused by an internal problem.
This issue is also being tracked by our internal team. However, we don't have a concrete schedule for the fix right now.

Thanks.