Hello,
I just bought a Jetson Nano running Ubuntu 18.04. I have not been able to find the appropriate SDK Manager to run on the Nano (JetPack 4.8 is installed).
After a bit of reading it seems there are a few restrictions I need to work with:
- Do not work on the Jetson Nano itself due to hardware limitations (not an option anyway, since I can't find an appropriate SDK Manager to run on it).
- While a host computer is recommended, it must run the exact same Ubuntu version as the Nano. This is an issue for me because my Ubuntu PC is on 22.04, and I don't want to sacrifice all of its uses by downgrading it to 18.04 just to work with JetPack.
- A VM is not advisable for various reasons (I see NVIDIA saying this in several threads).
Is all of this actually true? Does anyone have advice about overcoming these obstacles?
More details: I have an Ubuntu 22.04 machine, a MacBook Pro with Apple M2 silicon, and the Jetson Nano.
Any advice is appreciated.
Tom
For reference:
- L4T is just Ubuntu with NVIDIA drivers. You can find the L4T release via "head -n 1 /etc/nv_tegra_release".
- JetPack is just a front end to the flash software running on the host PC. The actual flash is performed by the “driver package” (a recovery mode Jetson is just a custom USB device understood by that driver).
- SDK Manager is the smart network layer on top of JetPack.
- JetPack 4.x is designed to install/flash L4T R32.x, which is what you mentioned: Ubuntu 18.04.
- You are correct that JetPack 4.x won't run on a newer Ubuntu. The driver package can be used to flash the basics from the command line without the host being Ubuntu, or Ubuntu 18.04. The command line has fewer restrictions, but it is JetPack which installs the "optional" software. After a flash the Jetson automatically reboots and becomes fully booted instead of being in recovery mode. At that point JetPack installs the "goodies" (such as CUDA) over ssh to the fully booted Jetson. One can also use the apt-get tool to install those if the NVIDIA repositories are set up (they likely are), and if you know the name of the package(s) (this is the hard part; JetPack already knows these, so adding those packages through JetPack doesn't require personally knowing the package names).
- The boot chain of a Jetson is that of an embedded system, not that of a desktop PC. This means the "usual" GRUB is not used in this particular hardware and boot setup, and it is the boot content (for the most part) which prevents more mainstream software from working for boot. You are indeed limited to L4T R32.x (Ubuntu 18.04) as the latest Ubuntu on the older Nano.
- You can find a list of URLs for content related to a given L4T release here:
https://developer.nvidia.com/linux-tegra
- The Nano has reached end of life, so new features are no longer added to it. There won't be more releases in the compatible L4T R32.x/JetPack 4.x line (I suppose it is possible NVIDIA might add a security fix, but unless something unusual happens, I doubt there will be any more updates for this major release).
- The Xavier NX works up to L4T R35.x (flashed with JetPack 5.x), which is Ubuntu 20.04. This too, I think, might be at the end of new features, although it might get security updates (I don't know for sure).
- The Orin Nano is actually supported and developed for new features. This can use Ubuntu 22.04 (which is L4T R36.x, and flashed by JetPack 6.x).
- The Orin Nano under L4T R36.x uses the mainline kernel. A UEFI boot chain is fully implemented, and thus this boot chain can mostly use more non-custom boot software once it gets to the start of UEFI itself.
- A big reason the boot chain is custom is that Jetsons don't have a true BIOS. A desktop PC implements this with a combination of extra hardware and firmware. While a PC sets up power rails and clocks and enables parts of the system using that extra hardware, a Jetson does this entirely in software/firmware. That means you don't have that standardized BIOS function for the boot chain to continue from. UEFI, though, has an abstraction phase, and once that is loaded, the rest of the UEFI boot is standard (the older non-UEFI Xavier and earlier do not have this; the parts which might exist for Xavier are incomplete for UEFI). Doing it this way lowers the power requirement and reduces the physical size, which is likely why this approach is used.
- You would have to go to Orin Nano to get Ubuntu 22.04 (L4T R36.x), but you would also get a mainline kernel which doesn’t require NVIDIA modifications for drivers to work.
- Technically, the Orin Nano only comes with Ubuntu 22.04 (in the form of L4T R36.x), but there is support for those who want to use some other distribution (it would be a lot of work, but since so much is “standard” and not custom for this release such an approach is practical for those with patience). There isn’t any approach for this in the older Nano on L4T R32.x, and that older Nano cannot exceed L4T R32.x.
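The version check and the apt-based install of optional components mentioned above can be sketched roughly like this. The release-string format and the "nvidia-jetpack" metapackage name are assumptions based on typical L4T R32.x systems, not guarantees:

```shell
# Sketch: read the L4T release string. On a Jetson the first line of
# /etc/nv_tegra_release typically looks like the sample below (assumed format).
line='# R32 (release), REVISION: 7.1, GCID: 29689809, BOARD: t210ref, EABI: aarch64'
if [ -r /etc/nv_tegra_release ]; then
  line=$(head -n 1 /etc/nv_tegra_release)
fi
major=$(printf '%s\n' "$line" | sed -n 's/^# R\([0-9]*\) (release).*/\1/p')
echo "L4T major release: R$major"

# Sketch: install the optional JetPack components over apt instead of via
# JetPack itself. "nvidia-jetpack" is the usual metapackage name; only
# attempt this on an actual L4T system with NVIDIA's repositories configured.
if [ -e /etc/nv_tegra_release ]; then
  sudo apt-get update
  sudo apt-get install -y nvidia-jetpack
else
  echo "not an L4T system; skipping package install"
fi
```

On a real Nano the first command should report R32; the metapackage pulls in CUDA and friends without needing to know each individual package name.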
I don't know when, but there is likely a new hardware generation beyond Orin getting ready for release. If so (there has to be something brewing, but I don't know the name or timing), it is highly likely that the next generation will start with L4T R36.x, and then shortly after release a new R38.x might come out (this is entirely speculation). R38.x would be Ubuntu 24.04 if the release patterns continue. Maybe Orin will stop getting new features at the end of L4T R36.x (I'm guessing, though, that R36.x will be actively developed for at least another year beyond the end of this year).
The reasons VMs are not recommended are many. Here is a subset:
- Each VM brand has its own configuration setup. It is not something NVIDIA software can help with, so the end user would need to become "fluent" in their specific brand of VM.
- During flash, the custom USB Jetson (that's what recovery mode is) will disconnect and reconnect on the USB line. Many VMs will lose the Jetson and not reacquire the USB device after this. It is up to the end user to figure out how to make the USB always be reacquired after loss.
- The VM's kernel has to support loopback. One might have a VM which is fully Ubuntu 18.04, but consider that installed kernels have a lot (I should emphasize this even more) of configuration options. For quite some time the kernel in Windows WSL2 did not have loopback support, so it could not work unless the user replaced the kernel with a custom-built one inside WSL2. That's a lot of understanding required, and a lot of work; far more than having a second hard drive and dual booting. Maybe more recent WSL2 supports loopback without extra work, but it is up to the end user to find out.
- The filesystem type used by the VM must be ext4. Attempting this on a Windows filesystem such as NTFS or VFAT will only "appear" to work; the flashed Jetson will then fail in a large part of its functionality. So you need an ext4 filesystem either way.
- VMs can be made to work. I think there is even some official documentation mentioning WSL2 (the original WSL was rather hopeless; later patches of WSL2 likely have some success, along with several other brands of VM, if you know how to set them up).
- Networking setup is also required, with similar issues: after the Jetson flashes and automatically reboots to install the optional software, the VM needs to pass through either the virtual wired ethernet on the USB cable used for the flash, or an actual wired ethernet, without the VM or host interfering.
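The VM prerequisites above (loop device support, an ext4 working filesystem, and the USB virtual ethernet) can be sanity-checked on the host with something like the sketch below. The 192.168.55.x subnet is the usual L4T default for the USB device-mode network; treat it as an assumption rather than a guarantee:

```shell
# Loop device support: the flash tools build the rootfs image via loopback.
if [ -e /dev/loop-control ] || ls /dev/loop* >/dev/null 2>&1; then
  echo "loop device support: present"
else
  echo "loop device support: MISSING (flash will fail from this kernel)"
fi

# Filesystem type: the flash working directory must sit on ext4.
fstype=$(df --output=fstype . | tail -n 1)
echo "working directory filesystem: $fstype"

# USB virtual ethernet: after the Jetson reboots, the host normally gets
# an address on the 192.168.55.x subnet (assumed L4T default).
ip -o -4 addr show 2>/dev/null | grep '192\.168\.55\.' \
  || echo "no USB virtual ethernet found; check VM USB/network passthrough"
```

None of this replaces getting the VM's USB reacquire behavior right, but it quickly flags the kernel and filesystem problems before a flash is attempted.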
Now consider an option: if you add a second hard drive to the PC, you can install Ubuntu 18.04 (or whatever you require) on it while leaving your other Linux and any Windows install alone. You'd just pick which one to boot, and none of those other learning curves are needed. Your Ubuntu 22.04 could even read and access the Ubuntu 18.04 install, and the other way around. Overall, you only need this during flash; after that you only need Ubuntu 18.04 if you are running particular developer tools which require it.
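As a rough sketch of that cross-release sharing, the 18.04 partition could be mounted from the 22.04 side like this (the device name /dev/sdb1 is purely an example; substitute your second drive's partition):

```shell
dev=/dev/sdb1            # example only: the Ubuntu 18.04 partition
mnt=/mnt/ubuntu1804
if [ -b "$dev" ]; then
  sudo mkdir -p "$mnt"
  sudo mount "$dev" "$mnt"   # ext4 mounts fine from either release
  ls "$mnt"
else
  echo "$dev not present; adjust the device name for your system"
fi
```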
Wow! Thank you for the thorough description! I really appreciate you taking the time to explain all this critical info. Your post should be on their website as it’s much more helpful than the diagrams they have up.
So knowing all this, I've decided to come back and buy upgraded hardware. Here's why:
My goal is to write some CUDA kernels over the holiday and get comfortable coding to NVIDIA's hardware. I can do that just as easily with an NVIDIA GPU (PCIe card) in my Linux rack server.
The Nano I purchased was from Seeed and boots from internal storage (not an SD card). Even though it was using JetPack 4.8, I was having trouble booting from USB, and when I tried to partition the USB drive I caused an error in the OS. For some reason I was unable to get the Seeed board to boot from the SD card or USB. The last option for me was to wire the jumpers to go into recovery mode. I was unsure how to do this on my board and, honestly, it felt like I was doing open-heart surgery on a gerbil. The age of the Nano made me question whether the juice was worth the squeeze.
I decided it was not worth it, and because I bought this from Amazon, I was able to return it and just get an NVIDIA PCIe card to do some coding for the time being.
I do have projects that require an Orin (AI vision) or more advanced board. When the time comes in the spring, I’ll go ahead and buy a newer, more useful piece of hardware to keep moving forward.
Thanks again for all this info; I’ll refer back to this when the time comes.
THT
Be sure to note that JetPack allows checking and unchecking of various options, but it does not look obvious. One might look at JetPack and think you must flash when using it. You can in fact uncheck flash and do things like install CUDA over ssh (no recovery mode) to just the Jetson, or to just the host PC, depending on what you have checked. The releases of CUDA on Orin are newer, but the version used is for the integrated GPU (iGPU), which is attached directly to the memory controller and does not work with PCIe GPU detection schemes (the nvidia-smi program).
However, JetPack 6.x (which is L4T R36.x; Orin supports this) now does have a version of nvidia-smi which is specialized to work with the iGPU (desktop PCs use a discrete GPU, dGPU, on a PCI bus). Some of the details for this iGPU version of nvidia-smi won't be at the same level as the dGPU version. The lesson, though, is to make sure you examine the sample code and software provided by JetPack; otherwise you might find some driver or tool of the usual software "out in the wild" does not work on a Jetson (that software could work, but only if adapted to the iGPU; the software out there has basically the same function and use as on the Jetson, so it is the iGPU/dGPU details where they differ, and it is the JetPack-installed version which was adapted to the iGPU).
You won't find nvidia-smi of any use on JetPack 5.x (L4T R35.x) or earlier. Thus, one must use Orin or newer (I expect a new hardware release sometime in the near future, but I don't know exactly when).
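A quick way to see which situation you are in, as a sketch: on JetPack 6.x the iGPU-aware nvidia-smi should be present, while on JetPack 5.x and earlier it will simply be absent:

```shell
# Check whether this L4T release shipped an iGPU-aware nvidia-smi.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi || true   # on Orin/JetPack 6.x this reports the iGPU
else
  echo "nvidia-smi not found (expected on JetPack 5.x / L4T R35.x and earlier)"
fi
```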