“Standalone installers are not provided for the ARMv7 release. For both native ARMv7 as well as cross development, the toolkit must be installed using the distribution-specific installer. See the Cross-build Environment for ARM installation section for more details.”
And the Cross-build Environment section says to install a cuda-repo*.deb file with dpkg, but I don’t know where to get that file (same problem as with the L4T installation above).
How am I supposed to install the drivers and libs on my Jetson TK1?
I suppose I could install a Cross-compiler version on my host, and copy the libs over to the Jetson, but there must be an easier way…
What did I miss? Maybe the drivers are installed, I just don’t know where?
It’s been a while since I installed CUDA (I’ve forgotten many details), but the first thing to know is that L4T does not come with CUDA…the L4T packages do not assume Jetson, and may be used on other platforms. For ARM (and Jetson) the latest CUDA version is 6…should you see version 6.5, ignore it.
I got the samples running on the Jetson, that’s great (although I can’t see a thing, even with export DISPLAY=:0).
Next steps are to:
exit the screensaver from the remote terminal, or get input devices to my Jetson
get the cross-compiler to work
rebuild the L4T kernel with NFS server support (the Getting Started guide for cross-compilation advises using TARGET_FS with the mounted target filesystem, which requires a kernel with NFS support; I find that the default L4T kernel does not have it).
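For the kernel rebuild step above, a minimal sketch of the usual flow (paths are assumptions; the idea is to start from the running config and switch on the NFS server option before rebuilding):

```shell
# Sketch only: rebuild the L4T kernel with NFS server support enabled.
# Assumes the matching kernel sources are unpacked in ~/kernel_src and
# that this runs on the Jetson itself (native build).
cd ~/kernel_src
zcat /proc/config.gz > .config   # start from the currently running config
# Enable the in-kernel NFS server (CONFIG_NFSD), e.g. via:
#   File systems -> Network File Systems -> NFS server support
make menuconfig
make -j4 zImage modules
sudo make modules_install
# Then install arch/arm/boot/zImage to /boot and reboot.
```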
So, to summarize, I downloaded the 6.0 CUDA toolkit, which was available for the Jetson. Once copied and installed on target, I can run the samples.
Now I am trying to set up my cross-compiler, and the download page (https://developer.nvidia.com/jetson-tk1-support) only provides a toolkit for Ubuntu 12.04 x86 64-bit.
What if I have another host machine? I have Ubuntu 14.04, which I think is not very exotic.
I use Fedora as my host…but I believe the Ubuntu 12.04 x86 toolkit will work on Ubuntu 14.04 as well…I’m thinking they are still compatible (i.e., there has been no reason for them to build a newer version). Cross compilers themselves can be confusing in naming, since the package name may indicate both the machine the compiler runs on and a different machine it builds for…depending on the packaging convention. Look for “eabihf”, which indicates the ARM hard-float convention and is used everywhere in L4T except the boot loader.
If you use the Linaro toolchain listed above, look for the package name on the left with “gcc-linaro-arm-linux-gnueabihf-4.9”. This tells you the provider is Linaro, it compiles for the ARM architecture, the output runs on Linux using the ARM hard-float convention, and the compiler version is 4.9. Next, look for a recent release date (mine is from July), and on the right look for the host architecture it runs on…I picked up the “linux” version with xz compression (it decompresses with the 7z program; bz2 would be an alternative, but it is a large download and xz was smaller).
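As a concrete sketch of setting that up (the exact tarball name below is hypothetical; substitute the release you actually downloaded):

```shell
# Unpack a Linaro cross-toolchain on the host and sanity-check it.
# The filename is an example; use your actual download.
mkdir -p $HOME/toolchains
tar -xJf gcc-linaro-arm-linux-gnueabihf-4.9-2014.07_linux.tar.xz -C $HOME/toolchains
export PATH=$HOME/toolchains/gcc-linaro-arm-linux-gnueabihf-4.9-2014.07_linux/bin:$PATH
arm-linux-gnueabihf-gcc --version   # should report gcc 4.9.x
```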
But…you don’t really need any of that, since you have a full operating system on the Jetson. You can just compile kernels directly on it. I happen to have another embedded Tegra/ARMv7 system which does not have that ability, so I have to be able to cross compile anyway. Should you want a copy of files from the Jetson for a cross-compile environment on your host, and assuming you’ve flashed the Jetson at least once, you’ll have a file in your flash bootloader directory called system.img. You can loopback-mount system.img as ext4, and it is an exact copy of your entire Jetson at the moment you flashed. You could use rsync to keep this up to date as well.
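The loopback mount mentioned above looks roughly like this (paths assume the stock L4T flashing layout on the host; adjust to where your bootloader directory actually is):

```shell
# Sketch: loopback-mount the flashed rootfs image on the host.
sudo mkdir -p /mnt/jetson
sudo mount -o loop -t ext4 Linux_for_Tegra/bootloader/system.img /mnt/jetson
ls /mnt/jetson       # a full copy of the Jetson rootfs as of the last flash
# ...use /mnt/jetson as a sysroot for cross compiling, or rsync into it...
sudo umount /mnt/jetson
```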
Thanks, that’s what I thought (that there was a chance the 12.04 package would work in 14.04).
No bare metal; I upgraded to the Grinch 19.3.6 kernel (so my wireless USB mouse and keyboard would work, lucky for us that @Santyago makes this).
I really would like to get the cross-compiling working. It’s not only to rebuild kernels, I’m going to develop applications and would really want to keep the comfort and speed of my workstation.
Interesting note about system.img, I didn’t know!
Right now I am considering recompiling the kernel to enable the NFS server, and using TARGET_FS on my host to build directly against the target’s CUDA libs.
What you’re saying is that I could mount my local system.img and use that instead? I’d have to make an image that contains the CUDA libs, which for now I install manually on the target after flashing.
It feels a bit safer to mount the Jetson on my host and use the libs there; then I know I am using the right libs. The “Getting Started” guide seems to agree. Although it’s a bit strange that you need to rebuild a kernel just to follow the Getting Started guide (I hear you about the security issues of having NFS enabled by default).
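For reference, a minimal sketch of that TARGET_FS flow, assuming the Jetson exports its rootfs over NFS and that the CUDA 6 sample Makefiles accept TARGET_ARCH/TARGET_FS as the Getting Started guide describes (hostname and paths are placeholders):

```shell
# Sketch: mount the Jetson's rootfs over NFS and build the samples
# on the host against the target's real CUDA libs.
sudo mkdir -p /mnt/jetson
sudo mount jetson-tk1:/ /mnt/jetson      # needs the NFS server on the Jetson
cd ~/NVIDIA_CUDA-6.0_Samples
make TARGET_ARCH=armv7l TARGET_FS=/mnt/jetson
```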