I have tried many ways to remote debug with Nsight Eclipse Edition, but they all failed, and I want to know why.
In the debug configuration I selected "upload local executable", "debug remote executable", and "do not transfer executable".
However, at the beginning of debugging, the software crashes.
So I want to know how to remote debug correctly with the Xavier.
Thanks!
Hi,
You should be able to set up the environment following this tutorial:
[url]https://devblogs.nvidia.com/cuda-jetson-nvidia-nsight-eclipse-edition/[/url]
If you still hit errors with the instructions above, could you share some logs with us?
Thanks.
Thanks.
I followed the tutorial but still hit an error.
I created a CUDA clock sample project and set a breakpoint in the kernel.
When I click Resume, it does not stop at the breakpoint.
Here is some log output from the remote shell:
Last login: Sat Dec 8 07:11:41 2018 from 192.168.1.100
echo $PWD'>'
/bin/sh -c "cd "/home/nvidia/hello/Debug";export NVPROF_TMPDIR="/tmp";"/usr/local/cuda-10.0/bin/cuda-gdbserver" :2345 "/home/nvidia/hello/Debug/hello"";exit
nvidia@jetson-0423718017107:~$ echo $PWD'>'
/home/nvidia>
nvidia@jetson-0423718017107:~$ /bin/sh -c "cd "/home/nvidia/hello/Debug";export NVPROF_TMPDIR="/tmp";"/usr/local/cuda-10.0/bin/cuda-gdbserver" :2345 "/home/nvidia/hello/Debug/hello"";exit
Process /home/nvidia/hello/Debug/hello created; pid = 10183
Listening on port 2345
Remote debugging from host 192.168.1.100
Warning: Adjusting return value of linux_common_core_of_thread (pid=32, tid=32).
core = 4 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=33, tid=33).
core = 4 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=34, tid=34).
core = 4 >= num_cores = 4!
Warning: Adjusting return value of linux_common_core_of_thread (pid=35, tid=35).
core = 4 >= num_cores = 4!
[... many similar "Adjusting return value of linux_common_core_of_thread" warnings for other threads, all reporting core = 4-7 >= num_cores = 4, omitted ...]
CUDA Clock sample
GPU Device 0: "Xavier" with compute capability 7.2
And the gdb log:
Coalescing of the CUDA commands output is off.
warning: "remote:" is deprecated, use "target:" instead.
warning: sysroot set to "target://".
Reading /lib/ld-linux-aarch64.so.1 from remote target...
warning: File transfers from remote targets can be slow. Use "set sysroot" to access files locally instead.
Reading /lib/ld-linux-aarch64.so.1 from remote target…
Reading /lib/ld-2.27.so from remote target…
Reading /lib/.debug/ld-2.27.so from remote target…
0x0000007fb7fd31c0 in ?? () from target:/lib/ld-linux-aarch64.so.1
$1 = 0xff
The target endianness is set automatically (currently little endian)
Reading /lib/aarch64-linux-gnu/librt.so.1 from remote target…
Reading /lib/aarch64-linux-gnu/libpthread.so.0 from remote target…
Reading /lib/aarch64-linux-gnu/libdl.so.2 from remote target…
Reading /usr/lib/aarch64-linux-gnu/libstdc++.so.6 from remote target…
Reading /lib/aarch64-linux-gnu/libgcc_s.so.1 from remote target…
Reading /lib/aarch64-linux-gnu/libc.so.6 from remote target…
Reading /lib/aarch64-linux-gnu/libm.so.6 from remote target…
Reading /lib/aarch64-linux-gnu/librt-2.27.so from remote target…
Reading /lib/aarch64-linux-gnu/.debug/librt-2.27.so from remote target…
Reading /lib/aarch64-linux-gnu/47f37309461cc15fb1915bc198d718017a1f87.debug from remote target…
Reading /lib/aarch64-linux-gnu/.debug/47f37309461cc15fb1915bc198d718017a1f87.debug from remote target…
Reading /lib/aarch64-linux-gnu/libdl-2.27.so from remote target…
Reading /lib/aarch64-linux-gnu/.debug/libdl-2.27.so from remote target…
Reading /usr/lib/aarch64-linux-gnu/ec888879161599d09ef86dc9b55f5096935334.debug from remote target…
Reading /usr/lib/aarch64-linux-gnu/.debug/ec888879161599d09ef86dc9b55f5096935334.debug from remote target…
Reading /lib/aarch64-linux-gnu/866070a0bdee074a8459fae7b95155fdd12861.debug from remote target…
Reading /lib/aarch64-linux-gnu/.debug/866070a0bdee074a8459fae7b95155fdd12861.debug from remote target…
Reading /lib/aarch64-linux-gnu/libc-2.27.so from remote target…
Reading /lib/aarch64-linux-gnu/.debug/libc-2.27.so from remote target…
Reading /lib/aarch64-linux-gnu/libm-2.27.so from remote target…
Reading /lib/aarch64-linux-gnu/.debug/libm-2.27.so from remote target…
94	../src/clock.cu: No such file or directory.
Temporary breakpoint 2, main (argc=1, argv=0x7ffffff4f8) at ../src/clock.cu:94
Reading /usr/lib/aarch64-linux-gnu/tegra/libcuda.so.1 from remote target…
Reading /usr/lib/aarch64-linux-gnu/tegra/libnvrm_gpu.so from remote target…
Reading /usr/lib/aarch64-linux-gnu/tegra/libnvrm.so from remote target…
Reading /usr/lib/aarch64-linux-gnu/tegra/libnvrm_graphics.so from remote target…
Reading /usr/lib/aarch64-linux-gnu/tegra/libnvidia-fatbinaryloader.so.31.1.0 from remote target…
Reading /usr/lib/aarch64-linux-gnu/tegra/libnvos.so from remote target…
Reading /usr/lib/aarch64-linux-gnu/tegra/libcuda.so.1.1.debug from remote target…
Reading /usr/lib/aarch64-linux-gnu/tegra/.debug/libcuda.so.1.1.debug from remote target…
Reading /usr/lib/aarch64-linux-gnu/tegra/libnvrm_gpu.so.debug from remote target…
Reading /usr/lib/aarch64-linux-gnu/tegra/.debug/libnvrm_gpu.so.debug from remote target…
Reading /usr/lib/aarch64-linux-gnu/tegra/libnvrm.so.debug from remote target…
Reading /usr/lib/aarch64-linux-gnu/tegra/.debug/libnvrm.so.debug from remote target…
Reading /usr/lib/aarch64-linux-gnu/tegra/libnvrm_graphics.so.debug from remote target…
Reading /usr/lib/aarch64-linux-gnu/tegra/.debug/libnvrm_graphics.so.debug from remote target…
Reading /usr/lib/aarch64-linux-gnu/tegra/libnvidia-fatbinaryloader.so.debug from remote target…
Reading /usr/lib/aarch64-linux-gnu/tegra/.debug/libnvidia-fatbinaryloader.so.debug from remote target…
Reading /usr/lib/aarch64-linux-gnu/tegra/libnvos.so.debug from remote target…
Reading /usr/lib/aarch64-linux-gnu/tegra/.debug/libnvos.so.debug from remote target…
Just an observation... it couldn't find this:
94 ../src/clock.cu: No such file or directory.
Normally, on the Xavier, this would be:
/usr/local/cuda-10.0/samples/0_Simple/clock/clock.cu
Perhaps there is a way to tell Eclipse where to find this.
Hi
Thank you for the reply.
The log shows:
94	../src/clock.cu: No such file or directory.
However, I find that clock.cu already exists.
The file path is /home/nvidia/hello/Debug/../src/clock.cu
So I don't know why the log says it can't find the source file.
Yes... but is it looking on the Xavier, or on the PC? Quite often a debugger needs to know the source code of the program being debugged, and if debugging natively, that source is always on the same machine. In your case it could be that the sample code is on the wrong machine for the debugger.
Hi linuxdev!
The file path on the Xavier is /home/nvidia/hello/Debug/../src/clock.cu
The file path on the PC is /home/cxx/cuda-workspace/hello/src
The source file already exists.
And I followed the Nsight remote debug tutorial, so I don't know what to configure.
I can't answer the configuration question, but somewhere in there it probably needs either the root of the path to be the same or else an adjustment in the Nsight settings. For example, perhaps on the PC you could (using sudo as needed, then chown to whoever is running Nsight):
mkdir -p /home/nvidia
cd /home/nvidia
ln -s /home/cxx/cuda-workspace/hello .
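Another possible approach (untested in this thread, just a standard GDB mechanism that cuda-gdb inherits) is to tell the debugger to rewrite the compiled-in source prefix to where the sources actually live on the PC. This could go in the debug configuration's gdb command file or a .gdbinit; the "../src" prefix and the host path below are the ones reported earlier in this thread.

```gdb
# Map the build-time source prefix to the actual location on the host PC
set substitute-path ../src /home/cxx/cuda-workspace/hello/src
# Verify which substitutions are active
show substitute-path
```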
Thanks.
Now Nsight can find the source file.
But Xavier remote debugging still fails.
The log shows the same as in #3, and Nsight crashes while debugging.
Hi,
This is a known issue and is already fixed in CUDA 10.1.
You can get some information from our previous topic here:
[url]https://devtalk.nvidia.com/default/topic/1037598[/url]
CUDA 10.1 is not available currently.
Please wait for our announcement for the next JetPack release.
Thanks.
I see that CUDA 10.1 is available for the "host" Ubuntu machine quite easily. I am worried about upgrading the host because the target Jetson Xavier doesn't seem to have an easy way of updating to CUDA 10.1.
I'd like to switch to the "latest and greatest" (of course), and also do so without entirely reimaging my Xavier, as that would be a pain.
Please advise if:
- I can update the host to Cuda 10.1 without endangering the ability to remote debug, etc, on the Jetson(s) that I have.
- There is some way to update the Jetson (in my case I have an Xavier, a TX2 and a Nano) to 10.1 manually and if this will not screw up the various other dev stuff on the Jetson.
Etc.
Thanks.
You should find CUDA subdirectories in "/usr/local/". Each one has a version on it and should be independent. However, there is a symbolic link "/usr/local/cuda/" which points to one of the versions, so if you simply have "/usr/local/cuda/" in a named path, that is the version that gets used. If you edit any configuration naming "/usr/local/cuda/" and instead point it at a specific version, then the content used in compilation will be from that version.
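To make that concrete, here is a minimal sketch of the symlink mechanism, played out in a throwaway directory so nothing under /usr/local is touched (the cuda-10.0/cuda-10.1 names just mirror the layout described above):

```shell
# Illustrate how a "cuda" symlink selects a version, using a sandbox dir.
sandbox=$(mktemp -d)
mkdir -p "$sandbox/cuda-10.0" "$sandbox/cuda-10.1"

# "cuda" starts out pointing at 10.0, as it might on an existing install
ln -s "$sandbox/cuda-10.0" "$sandbox/cuda"
readlink "$sandbox/cuda"

# Re-point the link at 10.1: -f replaces the old link, -n keeps ln from
# descending into the directory the old link points to
ln -sfn "$sandbox/cuda-10.1" "$sandbox/cuda"
readlink "$sandbox/cuda"

rm -rf "$sandbox"
```

On a real system the same `ln -sfn` against /usr/local/cuda (with sudo) switches which version a build using the unversioned path picks up.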
Hi,
My host OS is Ubuntu 18.04, my GPU is an RTX 2060, and my target is a Xavier DevKit. I have the same problem, and I don't want to move to CUDA 10.1; there are many developers who downgrade from 10.1 to 10.0. So, is there a workaround?
Thanks.