CUDA missing on jetson-tk1

I flashed my Jetson TK1 according to this:
and got the Unity desktop running.

However, there is no CUDA in /usr/local or in /lib. Shouldn’t the drivers and libs be there after flashing?

Since I didn’t find them, I assumed I had to install them myself on the Jetson. The downloads available at do not really help me. For Linux ARM, there is:

  • Ubuntu 14.04 with the note “NOT to be used for L4T (Jetson TK1)”
  • Cross-compiler versions
  • 64-bit versions, whose instructions say something like
$ sudo dpkg -i cuda-repo-.deb
but it is still not clear where to get that file.
“Standalone installers are not provided for the ARMv7 release. For both native ARMv7 as well as cross development, the toolkit must be installed using the distribution-specific installer. See the Cross-build Environment for ARM installation section for more details.”
and then the Cross-build Environment link says to dpkg a cuda-repo*.deb file, which I don’t know where to get (the same problem as with the L4T installation above).

How am I supposed to install the drivers and libs on my Jetson TK1?
I suppose I could install a Cross-compiler version on my host, and copy the libs over to the Jetson, but there must be an easier way…

What did I miss? Maybe the drivers are installed, I just don’t know where?

Been a while since I’ve installed CUDA (I’ve forgotten many details), but the first thing to know is that L4T does not come with CUDA… the L4T packages do not assume a Jetson and may be used on other platforms. For ARM (and Jetson) the latest CUDA version is 6; should you see version 6.5, ignore it.

This URL may help, as you have to start by adding the CUDA repo to apt:

More specifically:

Once you have the repository for CUDA, apt-get should do as required.
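For reference, the repo-based sequence usually looks something like the sketch below. This assumes the L4T r19.2 repo package (the cuda-repo-l4t-r19.2_6.0-42_armhf.deb filename that comes up elsewhere in this thread) has already been copied onto the Jetson; adjust the filename to whatever you actually downloaded:

```shell
# Sketch of the repo-based CUDA 6.0 install on L4T (filename is an assumption).
sudo dpkg -i cuda-repo-l4t-r19.2_6.0-42_armhf.deb   # register the local CUDA repo with apt
sudo apt-get update                                  # pick up the new repository
sudo apt-get install cuda-toolkit-6-0                # toolkit, libraries and samples
sudo usermod -a -G video "$USER"                     # GPU access typically requires the video group
```

After that, the toolkit should land under /usr/local/cuda-6.0.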

There are a couple of ways to do it, an article on the Jetson Wiki explains:

I created a bit of a howto, from re-flashing through installing CUDA on the Jetson, here

I see, that’s why I couldn’t find a .deb for the Jetson at the CUDA 6.5 download page! They could have put a link to where to go, instead of just writing “not for the Jetson TK1”.

It helped, but I found the elinux guide by Kangalow to be best (although incomplete). Thanks Kangalow.

Tristan, thanks for the guide. You seem to use cuda-repo-ubuntu1304_6.0-37_armhf.deb instead of cuda-repo-l4t-r19.2_6.0-42_armhf.deb. Any reason for that?

I got the samples running on the Jetson, that’s great (although I can’t see a thing, even with export DISPLAY=:0).
Next steps are to:

  • exit the screensaver from the remote terminal, or get input devices working on my Jetson
  • get the cross-compiler to work
  • rebuild the L4T kernel with NFS server support (the Getting Started guide for cross-compilation advises using TARGET_FS with the target mounted… which requires a kernel with NFS server support, and I find that the default L4T kernel does not have it)
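For the kernel rebuild step, the usual flow is something like the hypothetical outline below (it assumes the L4T kernel source is unpacked and built natively on the Jetson; CONFIG_NFSD is the in-kernel NFS server option):

```shell
# Hypothetical outline: enable the NFS server before rebuilding the L4T kernel.
zcat /proc/config.gz > .config   # start from the running kernel's config, if exposed
make menuconfig                  # File systems -> Network File Systems -> NFS server support
grep CONFIG_NFSD .config         # expect CONFIG_NFSD=y (built in) or =m (module)
make -j4 zImage modules          # rebuild the kernel image and modules
```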

Thanks all!

So, to summarize: I downloaded the CUDA 6.0 toolkit, which was available for the Jetson. Once copied and installed on the target, I can run the samples.
Now I am trying to set up my cross-compiler, and the download page provides only a toolkit for Ubuntu 12.04 x86 64-bit.

What if I have another host machine? I have Ubuntu 14.04; I don’t think that’s very exotic.

The Getting Started guide (section 5.1.3) says:

I will try with the 12.04 package, but I am confused by the absence of a 14.04 package for cross-compiling to the Jetson TK1.

I use Fedora as host… but I believe the Ubuntu 12.04 x86 build will work on Ubuntu 14.04 as well… I’m thinking they are still compatible (i.e., there has been no reason to build a newer version). Cross compilers themselves can be confusing in naming, since the packaging may indicate both the machine they run on and a different machine they build for, depending on the naming convention. Look for “eabihf”, which indicates the ARM hard-float convention and is used everywhere in L4T except the boot loader.

Assuming you are not running on bare metal and are using the L4T the Jetson comes installed with, and if the Ubuntu 12.04 package mentioned above does not do the job, I’d recommend the gcc-linaro-arm-linux-gnueabihf cross compiler. I use version 4.9-2014.07 running on x86_64 Linux/Fedora.

If you use the Linaro compiler listed above, look on the left side for the package name “gcc-linaro-arm-linux-gnueabihf-4.9”. This tells you: provider Linaro, compiling for the ARM architecture, targeting Linux with the ARM hard-float convention, compiler version 4.9. Next, look for a recent release date (mine is from July), and on the right, the host architecture it will run on; I picked up the “linux” version with xz compression (it unpacks with the 7z/xz decompression tools; bz2 would be an alternative, but that is a larger download and xz was smaller).
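The naming convention can be decoded mechanically; a small sketch in plain POSIX shell (the package name is the one above, purely for illustration):

```shell
# Decode a Linaro toolchain package name into its parts (illustrative only).
name="gcc-linaro-arm-linux-gnueabihf-4.9"
triplet_and_ver="${name#gcc-linaro-}"     # strip the provider prefix -> arm-linux-gnueabihf-4.9
version="${name##*-}"                     # last dash-separated field -> 4.9
triplet="${triplet_and_ver%-$version}"    # target triplet            -> arm-linux-gnueabihf
echo "target triplet: $triplet, gcc version: $version"
```

The triplet is also the prefix of the installed tools, e.g. arm-linux-gnueabihf-gcc.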

But… you don’t really need any of that, since you have a full operating system on the Jetson; you can compile kernels directly on it. I happen to have another embedded Tegra/ARMv7 system which does not have that ability, so I have to be able to cross compile anyway. Should you want a copy of the Jetson’s files for a cross-compile environment on your host, and assuming you’ve flashed the Jetson at least once, you’ll have a file called system.img in your flash tool’s bootloader directory. You can loopback-mount system.img as ext4; it is an exact copy of your entire Jetson at the moment you flashed. You could use rsync to keep it updated as well.
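The loopback mount is a one-liner; a sketch, with the mount point and the path to the flash tool’s bootloader directory being illustrative:

```shell
# Sketch: loopback-mount the flashed system.img on the host.
# (Paths are illustrative; yours depend on where you ran the flash tool.)
sudo mkdir -p /mnt/jetson
sudo mount -o loop,ro bootloader/system.img /mnt/jetson   # raw ext4 image, mounted read-only
ls /mnt/jetson/usr/local                                  # installed CUDA would show up here
sudo umount /mnt/jetson
```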

Thanks, that’s what I thought (that there was a chance the 12.04 package would work in 14.04).

No bare metal; I upgraded to the Grinch 19.3.6 kernel (needed for my wireless USB mouse and keyboard to work; lucky for us that @Santyago makes this).

I really would like to get the cross-compiling working. It’s not only to rebuild kernels, I’m going to develop applications and would really want to keep the comfort and speed of my workstation.

Interesting note about system.img, I didn’t know!
Right now I am considering recompiling the kernel to enable the NFS server, and using TARGET_FS on my host to build directly against the target’s CUDA libs.
What you’re saying is that I could mount my local system.img and use that instead? I’d have to make an image that contains the CUDA libs, which for now I install manually on the target after flashing.
It feels a bit safer to mount the Jetson on my host and use the libs there; then I know I am using the right libs. The “Getting Started” guide seems to agree. Although it’s a bit strange that you need to rebuild a kernel to follow the Getting Started guide (I hear you about the security issues of having NFS enabled by default).
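For what it’s worth, the TARGET_FS flow for the samples looks roughly like the sketch below. The flag names are as I understand the CUDA 6.0 samples makefiles from the Getting Started guide, and the mount point is illustrative; verify both against your own install:

```shell
# Hypothetical sketch: cross-build the CUDA samples against a mounted target
# filesystem, per the Getting Started guide's TARGET_FS flow.
# (Paths and makefile flags should be checked against your CUDA 6.0 copy.)
export TARGET_FS=/mnt/jetson                      # Jetson rootfs mounted on the host
cd ~/NVIDIA_CUDA-6.0_Samples
make ARMv7=1 GCC=arm-linux-gnueabihf-g++ TARGET_FS="$TARGET_FS"
```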

Anyway, I am trying to get the cross-compiling working for the Jetson, and running into trouble there too. I am starting a new thread (since it’s getting off-topic for this one), but I admit I hope to see you there :)