Cross Compiling & Installation Issue

I’m just getting up and running with my Jetson and I have a question about some errors I’m hitting when cross-compiling and running an application.

Right now, my setup is as follows:

I can work either directly on the Jetson via the monitor or I can ssh into the Jetson on my laptop.

I have Nsight and nvcc installed on my laptop. I wrote an application, selected the target CPU architecture as ARM (Project → Properties → Target Systems → CPU Architecture → ARM) and compiled. I then moved the application over to the Jetson and tried to run it (./app), and it failed with “./app: command not found”.

I think something is up with my installation, though. I can run all the examples in “GameWorksOpenGLSamples” no problem… but I can’t run any of the examples in the “NVIDIA_CUDA-6.5_Samples/bin/armv7l/linux/release/gnueabihf” folder. The errors run the gamut:

If I try to run “matrixMulCUBLAS”:

“./matrixMulCUBLAS: error while loading shared libraries: libcublas.so.6.5: cannot open shared object file: no such file or directory”

If I try to run “simpleZeroCopy”:

“CUDA error at ../../common/inc/helper_cuda.h:1163 code=35(cudaErrorInsufficientDriver) "cudaGetDevice(&dev)"”

If I run “cudaOpenMP”:

“./cudaOpenMP Starting...

no CUDA capable devices were detected”

… this is running ON the Jetson…

I installed via the Jetpack. Is my installation just corrupt?
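For what it’s worth, a quick way to check whether the library named in the matrixMulCUBLAS error is actually visible on the board is to ask the dynamic linker directly. This is only a sketch; install paths vary by JetPack version, so it queries the linker cache rather than assuming a location:

```shell
# Check whether libcublas.so.6.5 (the library from the error message)
# is registered with the dynamic linker on the Jetson.
if ldconfig -p 2>/dev/null | grep -q 'libcublas\.so\.6\.5'; then
    echo "libcublas.so.6.5: found"
else
    echo "libcublas.so.6.5: NOT found"
fi
```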

What does file say about your app?

file ./app

Which Linux for Tegra release are you using?

I don’t have my Jetson with me right now, but if I do file ./app on my laptop I get:

./app: ELF 32-bit LSB  executable, ARM, EABI5 version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=5a8d665767c1893f83ff3ca404fc48bed93b7a41, not stripped

I’m using L4T version r21.2-6-5-prod.

Thanks!

Any ideas?

The original post is rather old. Some updates on the L4T version and how things were installed (such as via Jetpack) would help. Back in the original R19.x days CUDA 6.5 was not available on Jetson…it became available with R21.x. On all versions, if you are using a remote display, some of the graphics functions and CUDA get mixed up, and it mistakenly tries to run CUDA on the display machine instead of the Jetson (treating CUDA as graphics when in reality it only uses the GPU).

I’m not sure what you mean about the original post being old; I posted it just last night!

I have L4T version r21.2-6-5-prod. I installed via Jetpack.

I’m not doing a remote display. The display is plugged directly into the HDMI port of the Jetson.

I build the file on my laptop. I then move the file over to the Jetson via a USB thumb drive.

The L4T version in latest JetPack is r21.3. Can you please check if you are using the latest JetPack?
https://developer.nvidia.com/jetson-tk1-development-pack

“r21.2-6-5-prod” sounds odd. Where did you get that from?

$ head -1 /etc/nv_tegra_release                                                                          
# R21 (release), REVISION: 3.0, GCID: 5091063, BOARD: ardbeg, EABI: hard, DATE: Tue Feb  3 02:03:23 UTC 2015
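If you want just the release number rather than the whole line, a small sketch like the following pulls it out (the hard-coded sample string is simply the output quoted above; on a real board you would read the file instead, e.g. line=$(head -1 /etc/nv_tegra_release)):

```shell
# Extract the major L4T release number from an nv_tegra_release line.
line='# R21 (release), REVISION: 3.0, GCID: 5091063, BOARD: ardbeg, EABI: hard, DATE: Tue Feb  3 02:03:23 UTC 2015'
release=$(printf '%s\n' "$line" | sed -n 's/^# R\([0-9][0-9]*\) .*/\1/p')
echo "R$release"   # prints R21
```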

My mistake…I don’t know why I saw the date as a year ago instead of this year. Unfortunately my host is Fedora instead of Ubuntu, so I can’t use Jetpack. But the output from “file” says the architecture is right. I find, though, that when cross compiling the application is quite often linked against the wrong libraries, which might account for the issues.

Just for a bit of knowledge: if I loopback mount my raw Jetson image on my x86_64 host, I get the same output from the “file” command on either Jetson or host (the loopback image is the actual file system from the Jetson made available to the host). It becomes drastically different when running “ldd” on the executables (I believe ldd actually does some code traversal/execution when determining linking). When I run ldd on a properly working Jetson binary, I get a list of linked libraries…when I do the same under x86_64 for the same binary, it says it is not a dynamic executable. It sounds like perhaps something was linked against the wrong thing when built in your cross compile environment.

Just as a test, run “ldd” against your app from both the host and the Jetson, and see whether one side says it isn’t a dynamic executable even though the “file” command says it is.
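As a sketch of what that comparison looks like, here it is run against /bin/sh (just a convenient dynamically linked binary standing in for ./app; the point generalizes to any executable):

```shell
# 'file' only reads the ELF header, so it reports the architecture the
# same way on any machine:
if command -v file >/dev/null; then file /bin/sh; fi

# 'ldd' resolves libraries through the dynamic linker, so it only lists
# libraries on a machine whose linker matches the binary's architecture;
# for a foreign-architecture binary it says "not a dynamic executable".
ldd /bin/sh
```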

Very interesting. I did a

head -1 /etc/nv_tegra_release

and got

R19 (release), RELEASE: 2.0, etc...

Also, when I do an ldd on the file on my laptop, I get

Not a dynamic executable

When I do it on the file on Jetson I get a huge list of libraries I’m assuming are linked. So I guess this means that I am correctly cross-compiling, but I’m using an old version of L4T.

Question: Do I need to uninstall the previous CUDA/Jetpack installation or can I just run the latest Jetpack installer and let it update?

CUDA 6.5 will not run on R19.x. R21.x is required.

With ldd returning information on the Jetson, the cross compile is probably fine; either using CUDA 6.0 or flashing the Jetson to R21.x (making it CUDA 6.5 compatible) would do the job.

Since I don’t have an Ubuntu host I’m not sure what the status for Jetpack is…however, Jetpack tends to follow the most recent L4T release (R21.3) and CUDA (6.5). You’ll have to verify that.

On the Jetson side, this will be a fresh flash and will erase everything that was on the Jetson before. For your host I don’t know…but CUDA would be the same version.

Do you know if you can cross compile with CUDA 6.5?

You can cross compile for 6.5 but it won’t run on R19.x L4T. It’ll run on R21.x.

I see. I decided to try a fresh start so I pulled out an unopened Jetson. I installed the latest Jetpack. Somehow I’m still having issues getting the examples to run.

I try to run MatrixMul and get:

ubuntu@tegra-ubuntu:~/NVIDIA_CUDA-6.5_Samples/bin/armv7l/linux/release/gnueabihf$ ./matrixMul
[Matrix Multiply Using CUDA] - Starting...
cudaGetDevice returned error code 35, line(396)
cudaGetDeviceProperties returned error code 35, line(409)
MatrixA(160,160), MatrixB(320,160)
cudaMalloc d_A returned error code 35, line(164)

Trying to run the scan example:

ubuntu@tegra-ubuntu:~/NVIDIA_CUDA-6.5_Samples/bin/armv7l/linux/release/gnueabihf$ ./scan
./scan Starting...

CUDA error at ../../common/inc/helper_cuda.h:1033 code=35(cudaErrorInsufficientDriver) "cudaGetDeviceCount(&device_count)"

From a similar post on this forum, it seems like CUDA is not installed on the Jetson. I just don’t understand how that is remotely possible. nvcc is installed and I was able to compile the examples, so what is going on?

I have a folder called “NVIDIA-INSTALLER” in /home/ubuntu, but inside is a file named “Tegra124_Linux_R19.2.0_armhf.tbz2”.
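For the record, here is the sanity check I can run on the Jetson itself. This is only a sketch: nvcc proves the CUDA *toolkit* is present, but error 35 (cudaErrorInsufficientDriver) means the GPU *driver* is older than the CUDA runtime, which is what an unflashed R19.x board running CUDA 6.5 binaries would produce:

```shell
# Is the CUDA 6.5 runtime library visible to the dynamic linker?
if ldconfig -p 2>/dev/null | grep -q 'libcudart\.so\.6\.5'; then
    echo "CUDA 6.5 runtime: found"
else
    echo "CUDA 6.5 runtime: NOT found"
fi
# And which L4T driver release is actually on the board?
head -1 /etc/nv_tegra_release 2>/dev/null || true
```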

A fresh install should have wiped out all older R19.x, especially the R19.2 it ships with. I suggest you grab R21.3; Jetpack may be using that by now, but definitely R21.2 or higher.

I grabbed the latest Jetpack. Is it possible the Jetpack installer is not wiping/updating the L4T installation on the Jetson?

The Jetpack installer must have worked in some capacity… I obviously have CUDA 6.5 installed on my Jetson. Is it possible it didn’t update L4T?

I was reading over the Jetpack documentation again and found this:

    Target Platform Requirements:
  • Jetson TK1 Tegra Developer Kit, equipped with the NVIDIA Tegra TK1 processor
  • Developer system, cabled as follows:
      • Serial cable plugged into the serial port J1A2 UART4 on the target, connected to your Linux host directly or through a serial-to-USB converter. (This is needed to set up a serial console on the Linux host.)
      • USB Micro-B cable connecting Jetson TK1 (J1E1 USB0) to your Linux host for flashing
      • (Not included in the developer kit) To connect USB peripherals such as keyboard, mouse, and [optional] USB/Ethernet adapter (for network connection), a USB hub could be connected to the working USB port (J1C2 USB2) on the Jetson TK1 system.
      • An HDMI cable plugged into "J1C1 HDMI1" on the target, which is connected to an external HDMI display.
      • An Ethernet cable plugged into the J1D1 on-board Ethernet port.

Could my Jetpack installation be messed up because I didn’t have my system connected as above?

My host and Jetson are connected through a switch; right now they are both plugged into my home router.

Jetson flash requires the micro-B USB cable to be connected between Jetson and host…this is the cable supplied with Jetson. The Jetson must have also been started in recovery mode by holding down the recovery button while powering up or cycling power. On your host you should see output from lsusb:

Bus 002 Device 012: ID 0955:7140 NVidia Corp.

(bus and device can differ, ID will be constant)
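As a sketch, the check above can be scripted so it gives a clear verdict before you attempt a flash (0955:7140 is the constant USB ID quoted above for the TK1 recovery device):

```shell
# Is a Tegra TK1 in recovery mode visible on the host's USB bus?
if lsusb 2>/dev/null | grep -q '0955:7140'; then
    echo "Jetson visible in recovery mode"
else
    echo "Jetson NOT visible; check the recovery button and micro-B cable"
fi
```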

This is mandatory for flash. The existence of old directory content from the original shipped R19.2 guarantees the system has not been flashed since it came from the factory. Thus there is no possibility CUDA 6.5 will work on the device.