Jetson TK1 – IDE for CUDA development

Is “NVIDIA Nsight Eclipse Edition” the only option for developing with an IDE for the Jetson TK1?

Are there any other “working” alternatives? For example, the Qt Project?

What does your development setup look like?

Thanks for your answer.

You don’t have to use Nsight to develop anything.
You can use whatever IDE you prefer; the only limitation is your knowledge of that IDE.
(At a minimum, you need to know the nvcc commands that belong in your build script.)
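
As a hedged illustration, here is a minimal sketch of what that could look like; the file name, flags, and the sm_32 target (the Tegra K1 GPU) are just assumptions, not a prescribed setup:

```
// vector_add.cu -- minimal CUDA source; any IDE or build script can build it
// by calling nvcc directly, for example:
//   nvcc -arch=sm_32 vector_add.cu -o vector_add
#include <cstdio>
#include <cuda_runtime.h>

__global__ void add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1024;
    const size_t bytes = n * sizeof(float);
    float ha[n], hb[n], hc[n];
    for (int i = 0; i < n; ++i) { ha[i] = (float)i; hb[i] = 2.0f * i; }

    // Copy inputs to the device, run the kernel, copy the result back.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    add<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("hc[10] = %f\n", hc[10]);
    cudaFree(da); cudaFree(db); cudaFree(dc);
    return 0;
}
```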

Or the lazy way, which is what I do: compile the CUDA code into a library and then link against it.
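
A rough sketch of that library approach, with hypothetical file names and paths (adjust the flags and library locations for your own install):

```
// gpu_lib.cu -- wrap the CUDA code behind a plain C function, build it with
// nvcc into a library, and link that library from whatever toolchain the IDE uses:
//   nvcc -arch=sm_32 -c gpu_lib.cu -o gpu_lib.o
//   ar rcs libgpu.a gpu_lib.o
//   g++ main.cpp -L. -lgpu -L/usr/local/cuda/lib -lcudart -o app
#include <cuda_runtime.h>

__global__ void scale_kernel(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

// The host project only sees this declaration in a header and calls it like
// any other C function; no device code leaks into the rest of the build.
extern "C" void scale_on_gpu(float *data, int n, float factor) {
    float *d;
    cudaMalloc(&d, n * sizeof(float));
    cudaMemcpy(d, data, n * sizeof(float), cudaMemcpyHostToDevice);
    scale_kernel<<<(n + 255) / 256, 256>>>(d, n, factor);
    cudaMemcpy(data, d, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(d);
}
```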

I’m a little disappointed the Nsight IDE doesn’t exist on the TK1, but I understand why it doesn’t. I assume the command-line tools (nvcc, nvvp, etc.) are the same thing, just without the Eclipse GUI. I’m just getting started with it, though, so perhaps I’m mistaken.

Out of curiosity, is the source code for the Nsight Eclipse Edition available anywhere? If so, we might be able to build it ourselves.

Is it possible to install “NVIDIA Nsight Eclipse” inside a virtual machine for remote development?

I tried it (Ubuntu 12.04), but it failed for some reason, and I don’t have much experience with Linux. It seems that a CUDA-capable GPU must be available. But why? I want to develop remotely.

@Ramsey: If it were possible to run “NVIDIA Nsight Eclipse” on the Jetson itself, that would be a solution. On the other hand, how do you debug CUDA code on the Jetson without “NVIDIA Nsight Eclipse”?

Thanks.

Yes, it is possible to install it in a VM.

Nsight Eclipse includes a remote application launcher, so you can launch/debug directly on the Jetson.

To debug CUDA, you need to turn off the X11 server on the Jetson.
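
Without the Eclipse GUI, the same thing can be done from the command line with cuda-gdb. A rough sketch, assuming a stock L4T/Ubuntu image where lightdm is the display manager (the file name and exact commands are illustrative, not a prescribed workflow):

```
// debug_me.cu -- hypothetical target for command-line CUDA debugging on the TK1.
// Roughly:
//   sudo service lightdm stop            # stop X11 so the GPU is free for debugging
//   nvcc -arch=sm_32 -g -G debug_me.cu -o debug_me
//   cuda-gdb ./debug_me
//   (cuda-gdb) break square_kernel       # breakpoint inside device code
//   (cuda-gdb) run
#include <cstdio>
#include <cuda_runtime.h>

__global__ void square_kernel(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = data[i] * data[i];   // a convenient line to break on
}

int main() {
    const int n = 256;
    float host[n];
    for (int i = 0; i < n; ++i) host[i] = (float)i;

    float *dev;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);
    square_kernel<<<1, n>>>(dev, n);
    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);

    printf("host[3] = %f\n", host[3]);
    return 0;
}
```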

I have noticed one bug which is POSSIBLY related to this… not sure. It only shows up when using remote display.

The way remote display is supposed to work with X11 is that all computing except display happens on the remote machine, and your local workstation displays the result, for example via “ssh -Y”. In the case of hardware-accelerated display, the remote system cannot do this, as it has no direct access to the local machine’s video hardware (glxinfo run directly on the displaying machine shows hardware acceleration; glxinfo run via ssh -Y shows that hardware acceleration disappears, as it should).

In the case of CUDA, I found the API confused compute with rendering. That is, CUDA is not for remote display or video rendering, yet the application I compiled would not work because it demanded CUDA on my display/workstation machine. Apparently, because the GPU is associated with video, CUDA was treated as display even though it is not: the Jetson had access to the CUDA hardware, but the application assumed it was video and needed the workstation to provide it. This is a possible snag for clusters, as it means there are configurations where CUDA may accidentally run on the master node when it is assumed to run on the remote node. So far as I know, this would only occur when using ssh -Y or another remote display mechanism.

I asked about this in the CUDA forum in less detail, but got no response. At the time I did not understand as well what was going on, so my description was less detailed. The interesting thing is that, had CUDA of the proper version been available on my workstation, the software might have run without me knowing it was running on my workstation.
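
One way to catch that kind of surprise is a quick device query at startup, so you can see where the CUDA work actually lands. A minimal sketch (nothing here is specific to the application above):

```
// device_check.cu -- print which CUDA device this process would use, e.g. to
// confirm the work runs on the Jetson rather than silently requiring a GPU on
// the workstation that is doing the remote display.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess || count == 0) {
        printf("No usable CUDA device here: %s\n", cudaGetErrorString(err));
        return 1;
    }

    int dev = 0;
    cudaGetDevice(&dev);
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, dev);
    printf("Using device %d: %s (compute capability %d.%d)\n",
           dev, prop.name, prop.major, prop.minor);
    return 0;
}
```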