Jetpack 2.3.1 CUDA unmet dependencies

Hello everyone

I am new to the Jetson TX1 and have been trying to install the new JetPack, but I am facing a few obstacles.
Kindly let me know what I am doing wrong here.

I downloaded the new JetPack and ran it from the terminal, but during installation I get an error saying "Installing CUDA toolkit for Ubuntu 14.04 8.0.34 failed".

When I check the cuda_host_tx1.log it says:

The following packages have unmet dependencies:
cuda-toolkit-8-0 : Depends: cuda-samples-8.0 (>= 8.0.34) but it is not going to be installed
cuda-toolkit-8-0 : Depends: cuda-documentation-8.0 (>= 8.0.34) but it is not going to be installed

E: Unable to correct problems, you have held broken packages.

I tried to fix the broken packages with sudo apt-get install -f and then retried installing JetPack, but I still get the same error.

Kindly advise.
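If it helps, here are some non-destructive diagnostics I can run on the host (simulate mode only, nothing gets installed):

```shell
# Simulate the "fix broken packages" run without changing anything (-s = simulate)
apt-get -s -f install 2>/dev/null | head -n 20 || true
# List any CUDA packages dpkg knows about, including half-installed ones
dpkg -l 2>/dev/null | grep -i cuda || echo "no cuda packages listed"
```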

Was the install to a desktop x86_64 host machine? This is where JetPack actually installs…and you’d need the nVidia graphics card and driver before CUDA would be able to install. More information on the machine being installed to might help.

Are you running Ubuntu 14.04 on your desktop? I got these same errors when trying to install on 15.10 and 16.04 desktops with earlier Jetpack releases.

Hello @linuxdev and @sperok

The host machine is an x86_64 Ubuntu 14.04 LTS desktop.
I had no problems previously when I installed JetPack 2.0; I followed the JetsonHacks video exactly.

But currently I get the above-mentioned errors when installing the latest JetPack 2.3.

I checked the drivers with the following command and got:

$ glxinfo | egrep "OpenGL vendor|OpenGL renderer"

OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) Sandybridge Mobile

My question is: if I was successful with JetPack 2.0 on the same host machine, why am I facing these errors now?

@sperok: were you successful in installing the same on Ubuntu 14.04?

There’s the problem…CUDA (and CUDA-dependent packages) require the nVidia driver (which only works with nVidia graphics cards). It looks like you’re using an Intel graphics card and not an nVidia graphics card. What is the graphics card info on the host?
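One quick way to answer that, assuming the usual tools are present on the host:

```shell
# Is there an NVIDIA GPU on the PCI bus at all?
lspci 2>/dev/null | grep -i nvidia || echo "no NVIDIA device on the PCI bus"
# If the proprietary driver is installed and loaded, nvidia-smi will report it
command -v nvidia-smi >/dev/null && nvidia-smi || echo "nvidia-smi not installed (no NVIDIA driver)"
```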

Note that a Jetson can be flashed and get various packages added to it even if the host can’t handle the CUDA packages. I suspect any previous install with that particular host to a Jetson did not install CUDA to the host unless the host had a different video card at the time.

I have CUDA installed on the Jetson and also have Faster R-CNN source code working.
But the program runs only for a few minutes and then aborts with a stack trace error.

Is it possible to code directly on the jetson without the host?

I have not looked in quite some time to see if nvcc is available on the Jetson itself. I have a Fedora host and currently JetPack is the only way to install CUDA on a JTX1 (and JetPack requires an Ubuntu 14 host). Perhaps someone who has used JetPack for a recent install can comment on whether nvcc can be installed to the Jetson itself. In the case of other programming (C/C++, so on) it is easy to develop on the Jetson.
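A quick check on the Jetson itself would be something like this (the paths are the stock CUDA install locations, so treat them as assumptions):

```shell
# Is nvcc on the PATH?
command -v nvcc >/dev/null && nvcc --version || echo "nvcc not on PATH"
# JetPack normally drops the toolkit under /usr/local/cuda-<version>
ls /usr/local/cuda*/bin/nvcc 2>/dev/null || echo "no CUDA toolkit under /usr/local"
```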

I have a Faster R-CNN model running on the Jetson, but it only runs for 30 seconds and then throws an error saying:

Check failed: error == cudaSuccess (4 vs. 0) unspecified launch failure
Check failure stack trace

Any advice on this?

Compile with debug symbols and run in gdb. You should be able to get a stack frame via the “bt” (backtrace) command. From there you may be able to see specific parameters and function for the failure.
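A minimal, non-interactive version of that workflow, using a toy program that abort()s in place of the real model (everything here is hypothetical scaffolding):

```shell
# Toy stand-in for the failing program: it just calls abort()
cat > crash.c <<'EOF'
#include <stdlib.h>
int main(void) { abort(); }
EOF
# -g keeps debug symbols so the backtrace shows file/line info
cc -g -o crash crash.c 2>/dev/null || echo "no C compiler found"
# -batch runs the listed commands and exits; "bt" prints the stack at the SIGABRT
command -v gdb >/dev/null && gdb -batch -ex run -ex bt ./crash \
  || echo "gdb not installed; interactively: gdb ./crash, then 'run' and 'bt'"
```

On the real program, the interesting frames are the ones above the abort() that land in your own code or in the CUDA runtime.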


I ran my program in gdb and got an error:

Check failure stack trace
Program received signal SIGABRT, Aborted.
at ../ports/sysdeps/unix/linux/libc-do-syscall.S:44: No such file or directory

Later I tried to backtrace it and got:


Python exception <class 'gdb.MemoryError'>: cannot access memory at 0xfffee59c

Any advice for this?

I’m not very good with Python debugging, but if this were C/C++, I’d think the code has an abort() function call in it. Is there some sort of abort() call in the Python code itself?

The “No such file or directory” would mean you don’t have debug symbols available, which is not at all surprising on something so low level. Seems like that is probably part of a cross compile tool chain or sysroot. The “strace” command could actually provide some information, but the amount of output would be huge, and I’m not sure if you’d have to do something special to get strace under Python or not. If this were just a binary you’d do something like this:

strace -o TraceLog.txt <progname>
tail -n 200 TraceLog.txt
# ...examine log for "abort" or "abrt"...

If it turns out that the abort was via code in the program, then you could examine what possibly calls this abort. There is also a strong chance the call is from the Python interpreter itself, in which case the issue is some standard memory availability check.
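For what it's worth, strace attaches to the Python interpreter like any other binary; the only extra flag worth adding is -f to follow child processes. A toy sketch (script name hypothetical):

```shell
# Tiny stand-in for the real script
cat > hello.py <<'EOF'
print("hello")
EOF
# -f follows forks; the syscall log goes to TraceLog.txt, stdout still prints normally
command -v strace >/dev/null && strace -f -o TraceLog.txt python3 hello.py \
  || python3 hello.py \
  || echo "python3 not available"
```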

Assuming it is a memory issue, you might want to run “htop” and watch memory use as things go along (and this is the simplest thing to test). Perhaps it is as simple as needing more memory (in this case it would mean adding swap to work around limited physical RAM).
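The memory check itself is a one-liner; the usual swapfile recipe is included below as comments, since it needs root (and assumes the L4T kernel was built with swap support, which is worth verifying first):

```shell
# Snapshot current RAM and swap use; run htop alongside the model for a live view
free -h 2>/dev/null || echo "free not available"
# One-time 4 GB swapfile setup (root required):
#   sudo fallocate -l 4G /swapfile
#   sudo chmod 600 /swapfile
#   sudo mkswap /swapfile
#   sudo swapon /swapfile
# Verify with "free -h": the Swap row should then show 4.0G
```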