Flashing Orin Developer Kit without a native Ubuntu host

Good morning (or afternoon, if you’re here in the UK)

I have just bought and have started using an Orin Developer Kit, which I’m delighted with. However, I have one difficulty and would appreciate advice on how to solve it.

My current setup (which I also used with a Xavier Developer Kit a couple of years ago) has an Ubuntu host system running in a VMware Fusion virtual machine on my Mac Pro. I do not have, and will not have, a PC running Ubuntu natively – only these VM-based versions. (Which seems fine for cross-compilation.)

So I am concerned that I have no way to reflash the Orin, should that prove to be necessary. I set it up using the steps listed on the “Getting Started” web page, which worked fine. I then upgraded it to JetPack v5.0.2 with the “Package Management Tool”, following the steps described here.

This gives me nvidia-jetpack version 5.0.2-b231.

/etc/nv_tegra_release shows R35 revision 1.0 (this file only has one line, incidentally).

Checking dpkg for nvidia-l4t-core gives 35.1.0-20220825113828.

I assume that this l4t-core is simply what the device was flashed with when manufactured?
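In case it helps, these are the commands I used for the checks above (the dpkg-query form is just one way of reading the package database):

    cat /etc/nv_tegra_release
    # R35 (release), REVISION: 1.0, ...
    dpkg-query --show nvidia-jetpack nvidia-l4t-core
    # nvidia-jetpack   5.0.2-b231
    # nvidia-l4t-core  35.1.0-20220825113828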

My main questions are:

  1. How much of a problem is it (or will it become) if I am unable to flash from a native (non-VM) host?
  2. Is there any alternative to using a native host?
  3. What is the difference between flashing from a host system and simply updating packages using the Package Management Tool?
  4. [If there is an alternative way to reflash] Will reflashing wipe all of the setup I have now done for my CUDA-based application? (This takes me quite a while to do!)

Thanks …
Andrew

Flashing from a native PC is always recommended. Sometimes software can be upgraded/updated without flashing, but often a flash will be necessary. A VM is not officially supported, but if you can manage to force USB passthrough to always reach the VM, and if the VM has sufficient ext4 filesystem space available, then it can work.

And yes, flash erases everything. You can clone though, and a loopback mounted clone can be used to copy your work from it to the new system (there are a lot of variations on this). A clone will exceed the size of the partition though. Let’s say that you flash and this creates a 28 GB partition; then you need this much for the flash (actually more), plus the clone copy will double that. I think they say you need 50 GB of free space to start a flash, but you will need even more in many cases. If you exclude space consumed by the VM itself, and the flash software, and if you plan on cloning, then you might consider something like 125 GB or 150 GB of ext4 spare disk space before you start.
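As a rough sketch of the recovery step, assuming a raw clone image named clone.img.raw (the file name is just an example), the loopback mount and copy look like this:

    # Mount the raw clone image read-only via loopback:
    sudo mkdir -p /mnt/clone
    sudo mount -o loop,ro clone.img.raw /mnt/clone
    # Copy your work out of the cloned rootfs, e.g., a home directory:
    rsync -a /mnt/clone/home/your_user/ ~/recovered_home/
    sudo umount /mnt/clone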


That’s really helpful. Thank you for replying so quickly (on a Saturday, too).

One of my main concerns about trying to reflash from a VM-based host is the reliability of USB passthrough. I can believe that it might work, but I’m concerned that, if the USB connection were lost mid-flash, the Orin could be left in a poor state (possibly even bricked).

So, if I might ask a few more questions:

  1. If I attempt to reflash from a VM-based host, is there a risk that the Orin could be left unusable?
  2. When is it necessary to reflash, rather than simply upgrade packages? (I seem to have managed to upgrade to JetPack v5.0.2 okay, so far as I can tell, although the Power GUI is not working.)
  3. Would it be possible to use a Raspberry Pi v4 running Ubuntu as the host? (I have a couple that I currently use as VPN and DNS servers and could borrow for this.)

Thanks again for your help.

Andrew

Flash will never “brick” a Jetson, even when it fails or crashes miserably in the middle of a flash. “Bricking” means the device is no longer able to ever flash again. For example, a desktop PC has a BIOS which must work in order to flash; if one were to flash the BIOS and have that flash fail (on a motherboard without a backup BIOS), then the board is bricked, and the only way to fix it is to unsolder the BIOS memory, flash it outside of the motherboard, and then solder it back on. Jetsons don’t have a BIOS. It is possible that someone using low-level commands to manipulate unusual internal i2c addresses could brick a Jetson, but one would have to try hard to do this, and flashing would not be the cause.

However, a working Jetson that has suffered a failed flash won’t boot again until it is flashed successfully.

A failed flash from a VM would probably mean the Jetson doesn’t work again until you flash it again. FYI, one interesting “test” of whether a flash is likely to work: can you successfully clone from within the VM? Cloning also runs in recovery mode, but is read-only. I’m not certain, though, what the current state of cloning is for an Orin. Someone from NVIDIA would need to confirm whether all of the clone and restore operations work with the latest version (I know the developer preview had issues with this). The issue was a requirement that the partition size be an even multiple of 4096 bytes (other Jetsons needed an even multiple of 1024 bytes, so this was new, perhaps because of the larger eMMC in the Orin).
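For reference, cloning uses the same flash tool with the Jetson in recovery mode. A minimal sketch, assuming the stock board config name for the AGX Orin devkit (check the .conf files in your release for the exact name):

    # Run from the Linux_for_Tegra/ directory on the host:
    sudo ./flash.sh -r -k APP -G clone.img jetson-agx-orin-devkit mmcblk0p1
    # Produces clone.img (sparse) and clone.img.raw (full partition size).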

So far as upgrading packages goes, it depends on the package. Many parts of the system are just ordinary Ubuntu, not related to the specific hardware, and those parts update no differently than on an Ubuntu PC. However, the parts related to the GPU or to boot content tend to require NVIDIA versions. Especially with regard to the GPU, there are libraries related to CUDA or GPU-accelerated functions, and those libraries tend to be version locked: minor or patch releases can sometimes be upgraded with a simple apt command, but major releases tend to be tied to flashing a new L4T release. The reason for this is that the GPU is integrated directly into the memory controller (an iGPU), whereas the software one would normally install from apt expects the PCI version of a discrete GPU (dGPU); mainline CUDA and other GPU-specific software tends to require PCI mechanisms for detection and configuration. As an example, one cannot dual-install CUDA 10.x and 11.x on a Jetson the way one can on a PC, and one cannot migrate a Jetson from CUDA 10.x to 11.x with a simple apt command; flashing tends to be required.
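To illustrate the “minor release via apt” path, assuming the NVIDIA apt repository that a JetPack install sets up, you can see what is upgradable within the flashed release like this:

    sudo apt update
    # List NVIDIA/L4T packages with pending updates for the current release:
    apt list --upgradable 2>/dev/null | grep -i nvidia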

There was a recent conference video, which I missed, regarding some sort of evolution or update of the Jetson software update mechanism (something about it being simplified). I wonder if anyone from NVIDIA has a URL for that? I don’t recall the title, but it is probably something useful.

The RPi can read or access a Jetson’s serial console. What it cannot do is serve as a flash host. The flash software uses closed-source PC-architecture binaries, so it must run on an x86 host PC if you intend to flash. There is no reason an RPi couldn’t be used for functions like DNS or a VPN, since these are generic protocol-based services and don’t care what architecture the other end of the network is.
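As a sketch of the serial console use, from the RPi with a serial adapter wired to the Jetson’s debug UART (the device node is an assumption and depends on the adapter; the 115200 8N1 setting is the usual Jetson console default):

    sudo screen /dev/ttyUSB0 115200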


That is incredibly helpful. Thank you so much for making such a comprehensive and clear reply. Much appreciated.

What you say all makes complete sense. “Brick” was clearly the wrong verb, but I am nervous that, if I attempt to re-flash using a VM-based Ubuntu host, it risks leaving my Orin in the failed flash state that you describe. I appreciate that it might work, but …

So I am coming to the conclusion that, if I want to be able to re-flash (which I do, unless somebody from Nvidia indicates that there are plans for an update mechanism that does not require host-driven flashing), it would be safer to run the Ubuntu host natively. To do that, I have two options: obtain a cheap mini-PC (which I would rather not have to spend money on) or try to boot Ubuntu from an external SSD attached to one of my Macs (none of them have enough free space to create a partition on the internal SSD).

I have three Macs that I could choose from: a 2013 (“trash can”) Mac Pro; a 2012 “retina” MacBook Pro (but its discrete GPU is cooked); or a newer 2019 16" MacBook Pro. My inclination is to try the 16" MBP, although its security chip complicates matters slightly.

But if you or anybody else viewing this thread can offer any alternative suggestions, I’d be most grateful!

Thanks again for the invaluable assistance.
Andrew

PS. I know that “brick” is a noun, not a verb, but, as one of my American friends used to say, there ain’t no noun that can’t be verbed … :-)

Multi-boot is often used, although normally it would be with an internal partition.

I do recommend that you use the more recent JetPack/SDK Manager, and so you’d want to run Ubuntu 20.04 LTS. If you want to use older releases, then you’d load Ubuntu 18.04 instead.

Does the MacBook with the dead dGPU run any GUI at all? Does it work at the command line? There are limitations to installing and flashing from the command line, but it does work. A closer explanation of the flash software might help…

Originally there was only command line flash. In recovery mode the Jetson becomes a custom USB device, and there is a “driver package” for working with this custom recovery-mode USB device. The flash software is somewhat agnostic about what gets flashed; the second package is the “sample rootfs”, which is pure Ubuntu (in the older releases, Ubuntu 18.04 or earlier). One then runs the “sudo apply_binaries.sh” command to overlay the NVIDIA drivers into the “rootfs/” content, at which point it is no longer pure Ubuntu (and becomes known as “Linux for Tegra”, or “L4T”).
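In command form, that manual setup looks roughly like this; the file names are for an R35.1.0 release and will differ for other versions:

    # On the x86_64 host: unpack the driver package, then the sample rootfs into it
    tar xjf Jetson_Linux_R35.1.0_aarch64.tbz2
    cd Linux_for_Tegra/rootfs
    sudo tar xjpf ../../Tegra_Linux_Sample-Root-Filesystem_R35.1.0_aarch64.tbz2
    cd ..
    # Overlay the NVIDIA drivers; after this it is "L4T" rather than pure Ubuntu
    sudo ./apply_binaries.sh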

During a normal flash, the “rootfs/” content almost entirely determines the image that gets flashed. The command line arguments to “flash.sh” do cause some target-specific kernel, device tree, and extlinux.conf content to be copied in. Then an exact image of the APP (rootfs) partition is generated. “Standard” binary images are flashed to the non-rootfs partitions, and these never change within a given release unless the end user has done some sort of boot-stage customization. The generated image (based mostly on “rootfs/”) gets flashed. At this stage, a command line flash has installed no “optional” packages.
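A typical full command line flash of the eMMC model then looks like this (again, the board config name is the assumed one for the AGX Orin devkit):

    # Jetson in recovery mode, run from Linux_for_Tegra/:
    sudo ./flash.sh jetson-agx-orin-devkit mmcblk0p1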

Optional packages, e.g., CUDA and the sample programs, only get added over Ethernet (via ssh/scp and remote commands) to a fully booted system (after flashing, the system automatically reboots). This is what the GUI installer, JetPack/SDKM, does; it runs on top of the driver package as a front end. If you flash from the command line, and you have the knowledge, you could then use the “apt” mechanism to install that content manually. If you had used JetPack/SDKM, this would be done for you (including setting up the rootfs with apply_binaries.sh and downloading the optional content).
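For the manual route, the nvidia-jetpack meta-package you mentioned earlier is the usual entry point; on the booted Jetson:

    sudo apt update
    sudo apt install nvidia-jetpack   # pulls in CUDA, the samples, etc.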

Technically, if you were willing to do more work, you could install Ubuntu 20.04 on the MacBook Pro (I think it has an x86_64/amd64 PC-architecture CPU) and work in command line mode. However, any of your Macs which can load Ubuntu 20.04, and which have an amd64/x86_64 CPU, could do this with the GUI; you’d just need enough hard disk space. The host only needs an NVIDIA graphics card if it is to install and run the CUDA examples on the PC/Mac itself (people often fail to realize that most of the install-step content can be “unchecked”).

I don’t have any Mac experience, so I don’t know what details or roadblocks you’d run into trying to load Ubuntu 20.04, but I know there are many people who have done so.

