Jetpack 3.0 TX1 CUDA runtime missing?

I have installed JetPack 3.0 on several Jetson TX1 boards. I can compile code that requires CUDA, but I cannot run it because the drivers and CUDA runtime appear to be missing. With the previous version of JetPack there were no issues of this sort.

Hi,

Just checked JetPack 3.0: the CUDA runtime libraries can be found under /usr/local/cuda-8.0/lib64/.

ubuntu@tegra-ubuntu:~$ ll /usr/local/cuda-8.0/lib64/libcudart*
lrwxrwxrwx 1 root root 16 Jul 16 2016 /usr/local/cuda-8.0/lib64/libcudart.so -> libcudart.so.8.0
lrwxrwxrwx 1 root root 19 Jul 16 2016 /usr/local/cuda-8.0/lib64/libcudart.so.8.0 -> libcudart.so.8.0.34
-rw-r--r-- 1 root root 342848 Jul 16 2016 /usr/local/cuda-8.0/lib64/libcudart.so.8.0.34
-rw-r--r-- 1 root root 764424 Jul 16 2016 /usr/local/cuda-8.0/lib64/libcudart_static.a

Could you re-install via JetPack and try it again?
Thanks.

Yes, I checked, and they are there. I have tried reinstalling several times.

Hi,

Please attach all the logs located at '_installer/logs/64_TX1/' so we can debug this.

Thanks.

Hi stereoIV,

Have you managed to resolve this problem?
If not, please attach all the logs located at '_installer/logs/64_TX1/' so we can debug this.

Thanks

If you can successfully compile and link, then the libraries are there.
For execution, it might be a "dynamic library path not found" issue.
Does it go better if you add the CUDA libs path to the LD_LIBRARY_PATH environment variable?

export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64:$LD_LIBRARY_PATH

You can check the dynamic libs dependencies of your app with:

ldd your_app

You may also post the error log; it would help narrow this down.
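As a concrete sketch of the two checks above (the binary name your_app is a placeholder for your own program):

```shell
# Prepend the CUDA 8.0 library directory to the dynamic linker search path.
export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64:$LD_LIBRARY_PATH

# Any dependency the loader cannot resolve shows up as "not found";
# an empty result here means all shared libraries were located.
ldd ./your_app | grep 'not found'
```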

Hello,

An update from stereoIV's colleague: the problem occurs when code is compiled on a TX2 and run on a TX1. Code compiled on a TX2 works on the TX2, and code compiled on a TX1 works on the TX1; the same binary cannot be used across TX versions. Is there a way to compile the source so that the same build works on both the TX1 and the TX2?

Hi,

They have different GPU architectures:
TX1: sm_53
TX2: sm_62

We are aware of it. In the cmake file we have this included:

set(CUDA_NVCC_FLAGS ${CUDA_NVCC_FLAGS};-gencode arch=compute_53,code=sm_53 -gencode arch=compute_62,code=sm_62)
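To verify that both architectures actually end up in the resulting binary, one option (assuming the CUDA toolkit's cuobjdump is on the PATH; your_app is a placeholder) is:

```shell
# List the GPU architectures of the cubins embedded in the fat binary;
# with the -gencode flags above this should report both sm_53 and sm_62.
cuobjdump --list-elf ./your_app | grep -o 'sm_[0-9]*' | sort -u
```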

Hi,

Just checked: a CUDA sample compiled with the following options on a TX1 can be executed on a TX2 without error.

nvcc topic_1011293.cu -gencode arch=compute_53,code=sm_53 -gencode arch=compute_62,code=sm_62 -o test

Thanks.

I want to do the opposite: compile on TX2 and execute on TX1.

Hi,

We are checking this issue.
Will update information to you later.

Thanks.

Hi,

Sorry for keeping you waiting.

This issue is caused by the newer CUDA driver containing configuration that the older driver does not support:
TX1- CUDA Driver Version / Runtime Version 8.0 / 8.0
TX2- CUDA Driver Version / Runtime Version 8.5 / 8.0
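For reference, these version numbers can be read off each board with the deviceQuery CUDA sample (the path assumes the default JetPack sample install location):

```shell
# Build and run the deviceQuery sample, then pull out the version line.
cd /usr/local/cuda-8.0/samples/1_Utilities/deviceQuery
sudo make
./deviceQuery | grep 'CUDA Driver Version / Runtime Version'
```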

So there are two solutions:

  1. Compile on the TX1, which uses the older driver. The program can then run on both the TX1 and the TX2.
  2. Wait for our JetPack 3.1 release. We align the CUDA driver versions in JetPack 3.1, so this issue won't happen again.

Thanks.

Thanks for the information. Is there any estimate of when JetPack 3.1 will be released?

Hi,

We can't disclose our detailed schedule.
Please wait for our announcement and updates.

Sorry for the inconvenience.