AGX Orin DevKit does not boot after flashing in emulation mode as Orin 16 GB

This is basically the same question I asked back in this thread: How to flash AGX Orin DevKit with emulation into a bootable system?

Back then, I made a slight change to the jetson-agx-orin-devkit-as-nx-16gb.conf file, which allowed me to flash the DevKit and get it up and running in emulated mode as a 16 GB Orin. After using it that way for some time, the file system became extremely unresponsive at times and also filled up quickly, so I decided to upgrade to a 1 TB SSD. I also decided to use the latest version of L4T, which is 35.4.1.

All in all, these are the commands I used:

sudo docker run -it \
--privileged -v /dev/bus/usb:/dev/bus/usb/ -v /dev:/dev -v /media/$USER:/media/nvidia:slave \
-v $HOME/nvidia/download:/download \
-v $HOME/nvidia:/nvidia \
--name nvidiasdkmanager --network host \
--entrypoint /bin/bash \
sdkmanager

Then, inside the container,

VERSION=35.4.1 # or whatever is the latest 
export L4T_RELEASE_PACKAGE=Jetson_Linux_R${VERSION}_aarch64.tbz2
export SAMPLE_FS_PACKAGE=Tegra_Linux_Sample-Root-Filesystem_R${VERSION}_aarch64.tbz2
export BOARD=jetson-agx-orin-devkit

cd /nvidia && \
tar xf ${L4T_RELEASE_PACKAGE} && \
cd Linux_for_Tegra/rootfs/ && \
sudo tar xpf ../../${SAMPLE_FS_PACKAGE} && \
cd .. && \
sudo apt update && \
sudo apt install -y qemu-user-static picocom && \
sudo ./apply_binaries.sh && \
sudo tools/l4t_flash_prerequisites.sh && \
sudo ./flash.sh jetson-agx-orin-devkit-as-nx-16gb nvme0n1
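As a sanity check, I also confirmed that the emulation config named on the command line actually exists in the extracted BSP, just to rule out a typo:

ls Linux_for_Tegra/jetson-agx-orin-devkit-as-nx-16gb.conf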

The flashing log indicates that everything went fine, but the system does not boot. It does not matter whether I apply the one-line change to the .conf file; it fails either way.

As suggested in the other thread, I connected a micro-USB cable to capture UART output. The output is included in this .log file:
uart.log (84.2 KB)
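For reference, I captured the log with picocom over the devkit's debug serial console. On my host the micro-USB console shows up as /dev/ttyACM0 (check dmesg if yours differs):

# 115200 baud is the standard rate for the Jetson debug console
sudo picocom -b 115200 /dev/ttyACM0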

I am at a bit of a loss here. Any help is appreciated.

Hi,

flash.sh does not support flashing to NVMe drives, so I don’t think the flash actually succeeded. You should use initrd flash instead; please try running it like this:

sudo ./tools/kernel_flash/l4t_initrd_flash.sh --external-device nvme0n1p1 -c ./tools/kernel_flash/flash_l4t_external.xml --showlogs --network usb0 jetson-agx-orin-devkit-as-nx-16gb nvme0n1p1
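Also make sure the board is in force recovery mode before running it. You can check from the host with something like:

lsusb | grep -i nvidia
# a board in recovery mode shows up as an NVidia Corp. (APX) device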

The flashing log looks like this:
flash_log.txt (105.5 KB)
While I’m not an expert in NVIDIA devices, nothing in this log made me doubt that the flashing was successful, in particular because it says:
The target t186ref has been flashed successfully.

That said, I will try this initrd flash thing and we’ll see how it goes. Thanks for the help!

I’ll explain more here.

So in your case, only the initrd image is flashed into the eMMC. That is not a fully functioning system, nothing more than the kernel image itself, and upon boot it expects to mount the rootfs at /dev/nvme0n1p1 (you should also use nvme0n1p1 instead of nvme0n1). Since that partition does not exist, booting hangs.
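If you want to double-check which root device the flashed image will try to mount, the kernel command line is generated into extlinux.conf under the rootfs. A rough check, run from Linux_for_Tegra and assuming flash.sh wrote the APPEND line there:

grep "root=" rootfs/boot/extlinux/extlinux.conf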

I tried the l4t_initrd_flash.sh command you suggested, but it fails with the following log:
flash_initrd.log (243.2 KB)

The discussion about nvme0n1 vs. nvme0n1p1 also made me realize that I have not made any effort to actually create that partition; the SSD was factory new when I connected it to the Jetson. Before moving on, I will verify that the partition exists and create it if necessary. Unless the flashing scripts do that automatically?
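For the record, this is roughly what I plan to run, assuming the SSD shows up as /dev/nvme0n1 (note that mklabel wipes the disk):

lsblk /dev/nvme0n1                                  # list existing partitions, if any
sudo parted -s /dev/nvme0n1 mklabel gpt             # new empty GPT label (destroys all data)
sudo parted -s /dev/nvme0n1 mkpart APP ext4 0% 100% # single partition, becomes nvme0n1p1

(As far as I understand, l4t_initrd_flash.sh with the external-device XML is supposed to create the partition layout itself, but creating the partition manually shouldn't hurt.)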

Hi,

Is nfs-kernel-server installed on your host? Or check whether it’s running, e.g.:
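systemctl status nfs-server.service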

I have now created the partition nvme0n1p1. nfs-kernel-server is installed, but I cannot start it, because systemd is not the init system inside the container:

systemctl start nfs-server.service
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down

I will update the Docker commands to make sure nfs-kernel-server can run, and then post here again with results.

I am once again at a bit of a loss.

By messing around with Docker, I was able to get into the container with systemd as PID 1. The commands I used were

sudo docker run -it \
--privileged -d -v /dev/bus/usb:/dev/bus/usb/ -v /dev:/dev -v /media/$USER:/media/nvidia:slave \
-v $HOME/nvidia/download:/download \
-v $HOME/nvidia:/nvidia \
--name nvidiasdkmanager --network host \
--entrypoint /sbin/init \
sdkmanager

followed by

sudo docker exec -it nvidiasdkmanager bash

Inside the container, I can verify that systemd is PID 1 by running top. However, running systemctl start nfs-server.service still fails with the same error:

System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
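A quicker check than top, for anyone reproducing this: the following prints the name of PID 1, and inside this container it does print systemd.

ps -p 1 -o comm=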

This issue is only partially related to the Jetson, but I don’t see an obvious way around it. SDK Manager is officially distributed as a Docker image, but that image does not support systemd services, even though some of SDK Manager’s functionality depends on them. My natural conclusion would be that the SDK Manager Docker image simply doesn’t support the use case of flashing to an SSD. If that were true, it would be quite a notable limitation that should be documented somewhere, so I am more inclined to suspect that I’m doing something wrong.
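For completeness, the usual systemd-in-Docker recipe I have seen adds cgroup and tmpfs mounts on top of booting /sbin/init. A sketch along those lines (untested on my side, and it may well not fix the bus error):

sudo docker run -d --privileged \
--tmpfs /run --tmpfs /run/lock \
-v /sys/fs/cgroup:/sys/fs/cgroup:ro \
-v /dev/bus/usb:/dev/bus/usb -v /dev:/dev \
-v $HOME/nvidia:/nvidia \
--name nvidiasdkmanager --network host \
--entrypoint /sbin/init \
sdkmanager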

I don’t have access to a computer with Ubuntu 20.04 at the moment, so installing SDK Manager “natively” (without Docker) is not an option. As far as I can tell (from https://developer.nvidia.com/sdk-manager), SDK Manager with JetPack support only works on Ubuntu 18.04 and 20.04.

Any ideas?

Then I’d really suggest finding an Ubuntu 18.04/20.04 host and flashing without Docker.
