Back then, I made a slight change to the jetson-agx-orin-devkit-as-nx-16gb.conf file, which at that time allowed me to flash the DevKit and get it up and running in emulated mode as an Orin NX 16 GB. After using it that way for some time, the file system became extremely unresponsive at times and also filled up quickly, so I decided to upgrade to a 1 TB SSD. I decided to use the latest version of L4T, which is 35.4.1.
# Use the latest L4T release (35.4.1 at the time of writing)
VERSION=35.4.1
export L4T_RELEASE_PACKAGE=Jetson_Linux_R${VERSION}_aarch64.tbz2
export SAMPLE_FS_PACKAGE=Tegra_Linux_Sample-Root-Filesystem_R${VERSION}_aarch64.tbz2
export BOARD=jetson-agx-orin-devkit
# Extract the BSP and sample rootfs, install prerequisites, apply the NVIDIA binaries, and flash to NVMe
cd /nvidia && \
tar xf ${L4T_RELEASE_PACKAGE} && \
cd Linux_for_Tegra/rootfs/ && \
sudo tar xpf ../../${SAMPLE_FS_PACKAGE} && \
cd .. && \
sudo apt update && \
sudo apt install -y qemu-user-static picocom && \
sudo ./apply_binaries.sh && \
sudo tools/l4t_flash_prerequisites.sh && \
sudo ./flash.sh jetson-agx-orin-devkit-as-nx-16gb nvme0n1
The flashing log indicates that everything went fine, but the system does not boot. It makes no difference whether I apply the one-line change to the .conf file; it doesn’t work either way.
As suggested in the other thread, I tried connecting a micro-USB cable and capturing the UART output. The output is included in this .log file: uart.log (84.2 KB)
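For anyone reproducing the capture: the debug UART over the micro-USB port shows up as a USB serial device on the host, and picocom (installed above) can attach to it. The device node below is an assumption; check dmesg after plugging in to see the actual name on your host.

```shell
# Assumed device node for the AGX Orin debug UART over micro-USB;
# verify the actual /dev/tty* name with `dmesg` after connecting the cable.
# 115200 baud is the usual console speed for Jetson serial consoles.
sudo picocom -b 115200 /dev/ttyACM0
```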
I am at a bit of a loss here. Any help is appreciated.
The flashing log looks like this: flash_log.txt (105.5 KB)
While I’m not an expert in NVIDIA devices, nothing in this made me doubt that the flashing was successful, in particular because it says “The target t186ref has been flashed successfully.”
That said, I will try this initrd flash thing and we’ll see how it goes. Thanks for the help!
So in your case, only the initrd image is flashed into the eMMC. That is not a fully functioning system, nothing more than the kernel image itself, and upon boot it expects to mount the rootfs at /dev/nvme0n1p1 (you should also use nvme0n1p1 instead of nvme0n1). Since that partition does not exist, it hangs during booting.
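For reference, the external-storage flow being suggested here is the initrd-flash script shipped under tools/kernel_flash/ in the BSP. A sketch of the invocation, based on my reading of the L4T initrd-flash README — treat the XML path, the flags, and the final "external" target as assumptions to verify against the README in your release:

```shell
# Run from Linux_for_Tegra with the board in recovery mode.
# Flags and XML path assumed from the README_initrd_flash shipped with L4T 35.x;
# verify against your release before running.
cd Linux_for_Tegra
sudo ./tools/kernel_flash/l4t_initrd_flash.sh \
  --external-device nvme0n1p1 \
  -c tools/kernel_flash/flash_l4t_external.xml \
  --showlogs --network usb0 \
  jetson-agx-orin-devkit-as-nx-16gb external
```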
I tried the l4t_initrd_flash.sh command you suggested, which fails with the following log: flash_initrd.log (243.2 KB)
The discussion about nvme0n1 vs nvme0n1p1 also made me realize that I haven’t made any effort to actually create that partition. The SSD was factory new when I connected it to the Jetson. I will verify that this partition exists, and create it if necessary, before moving on. Unless the flashing scripts do that automatically?
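For completeness, creating and formatting that partition manually looks roughly like this, assuming the SSD really is /dev/nvme0n1 and is safe to wipe. (The initrd-flash flow is documented to partition the external device itself, so this step may turn out to be unnecessary.)

```shell
# WARNING: destroys all data on /dev/nvme0n1 (assumed device name; verify with lsblk first)
sudo parted /dev/nvme0n1 --script mklabel gpt
sudo parted /dev/nvme0n1 --script mkpart APP ext4 0% 100%
sudo mkfs.ext4 /dev/nvme0n1p1
lsblk /dev/nvme0n1   # should now list nvme0n1p1
```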
I have now created the partition nvme0n1p1. nfs-kernel-server is installed, but I cannot run it because systemd is not the init system. I will update the docker commands to make sure nfs-kernel-server can run, and then post here again with results.
systemctl start nfs-server.service
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
Inside the container, I can verify that systemd is PID 1 by running top. However, running systemctl start nfs-server.service fails with the error:
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
This issue is only partially related to the Jetson, but I don’t see an obvious way around it. SDK Manager is officially distributed as a Docker image, but that image does not support systemd services, yet some of SDK Manager’s functionality depends on them. My natural conclusion would be that the SDK Manager Docker image simply doesn’t support the use case of flashing to an SSD. If that were true, it would be quite a notable limitation that should be documented somewhere, so I suspect instead that I’m doing something wrong.
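For what it’s worth, the usual (unofficial) pattern for getting systemd services working inside a Docker container is to run the container privileged, with /sbin/init as PID 1 and the cgroup filesystem mounted. Whether this is sufficient for nfs-kernel-server inside the SDK Manager image is an assumption on my part; the image tag below is a placeholder, not the real name.

```shell
# <sdkmanager-image> is a placeholder for the actual SDK Manager image tag.
# --privileged and the cgroup mount let systemd boot as PID 1 inside the container.
docker run -d --privileged \
  -v /sys/fs/cgroup:/sys/fs/cgroup:rw \
  --cgroupns=host \
  --name sdkm \
  <sdkmanager-image> /sbin/init

# Once systemd has booted inside the container, start the service:
docker exec -it sdkm systemctl start nfs-server.service
```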
I don’t have access to a computer with Ubuntu 20.04 at the moment, so installing SDK Manager “natively” (without Docker) is not an option. As far as I can tell (from https://developer.nvidia.com/sdk-manager), SDK Manager with JetPack support only works on Ubuntu 18.04 and 20.04.