Unable to flash Orin Nano Devkit SD card with SDK Manager or flash.sh

Hello,

As I commented on a different open issue, I am having problems using jetson-io.py on my Jetson Orin Nano 8GB devkit. After some discussion I was asked to reflash the board either with SDK Manager or the flash command, specifically release 36.4 (a different issue from this thread). I have been attempting to do so, but both methods fail; I have done some testing and tried various things.

Setup:

  • Laptop with Ubuntu 22.04.4 LTS, dual-booted with Windows
  • USB-C cable I have used to flash other boards (I know it works properly)
  • 64GB SD card I have used before to flash an image with Etcher successfully
  • Disabled the Ubuntu firewall (see the sketch after this list)
  • Checked the NFS ports
  • Disabled USB autosuspend
  • 1TB external USB hard drive formatted as ext4 for the download folder, since my host computer does not have enough disk space
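
For reference, the last three items were done roughly like this (a sketch of standard Ubuntu commands, not an exact transcript):

# Disable the UFW firewall and confirm it reports "inactive"
sudo ufw disable
sudo ufw status

# Check that the NFS-related RPC services are registered on their ports
rpcinfo -p localhost | grep -E 'portmapper|mountd|nfs'

# Disable USB autosuspend (-1 = never autosuspend)
echo -1 | sudo tee /sys/module/usbcore/parameters/autosuspend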

Procedure & attempts

SDK Manager

  1. Format the SD card with the official SD Formatter
  2. Put the board into recovery mode by shorting the pins with a jumper
  3. Open SDK Manager and select my board (I have tested both pop-up options after the board is detected: Jetson Orin Nano 8GB and Jetson Orin Nano 8GB Developer Kit version)
  4. Select the desired components: I uncheck the host components since, for some reason, SDK Manager wants to use space on the local hard drive instead of the selected folders (which are on the external 1TB disk mentioned earlier)
  5. Click flash and wait

Here are the full logs as well as the serial console log captured during the flash

SDKM_logs_JetPack_6.1_(rev._1)Linux_for_Jetson_Orin_Nano[8GB_developer_kit_version]_2024-12-30_09-53-56.zip (180.0 KB)
serial_log_sdk_flash.txt (89.5 KB)

Flash Command

  1. Format the SD card with the official SD Formatter
  2. Put the board into recovery mode by shorting the pins with a jumper
  3. Follow the Quick Start — NVIDIA Jetson Linux Developer Guide documentation
    • Downloaded the corresponding version
    • Set up an .sh script with execute permissions, shown below
export L4T_RELEASE_PACKAGE="Jetson_Linux_R36.4.0_aarch64.tbz2"
export SAMPLE_FS_PACKAGE="Tegra_Linux_Sample-Root-Filesystem_R36.4.0_aarch64.tbz2"
export BOARD="jetson-orin-nano-devkit"

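# Extract the BSP and the sample rootfs, then install flash prerequisites and apply the NVIDIA binaries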
sudo tar xf ${L4T_RELEASE_PACKAGE}
sudo tar xpf ${SAMPLE_FS_PACKAGE} -C Linux_for_Tegra/rootfs/
cd Linux_for_Tegra/
sudo ./tools/l4t_flash_prerequisites.sh
sudo ./apply_binaries.sh

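# Flash the QSPI and put the rootfs on the SD card (mmcblk0p1) over the usb0 network using the initrd flash tool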
sudo ./tools/kernel_flash/l4t_initrd_flash.sh --external-device mmcblk0p1 \
-c tools/kernel_flash/flash_l4t_t234_nvme.xml -p "-c bootloader/generic/cfg/flash_t234_qspi.xml" \
--showlogs --network usb0 jetson-orin-nano-devkit internal
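
Before running the script I confirm the board is actually in recovery mode (a quick check; the exact product ID after NVIDIA's 0955 vendor ID depends on the module):

lsusb | grep -i nvidia
# expect a line like "ID 0955:7523 NVIDIA Corp." while in recovery mode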

Here are both the execution output and the serial console log captured during the run
output_script_flash.txt (154.4 KB)
serial_log_script_flash.txt (12.4 KB)

Thank you very much for your time,

Jorge

Hi,

Just some clarifications about the logs you provided.

  1. For sdkmanager: it is using the initrd flash tool, and it seems your host PC has some setting (e.g. firewall) that makes the NFS mount fail. A way to inspect the device side in that state is sketched below.

Info: Either the device cannot mount the NFS server on the host or a flash command has failed. Check your network setting (VPN, firewall,…) to make sure the device can mount NFS server. Debug log saved to /tmp/tmp.Q9JBilva4x. You can access the target’s terminal through “sshpass -p root ssh root@fc00:1:1:0::2”

  2. As for your 2nd log, from the run you did manually: the flash actually did not start. Your host side log shows it is still preparing the files.

populating kernel to rootfs… done.
populating initrd to rootfs… done.
populating kernel_tegra234-p3768-0000+p3767-0005-nv.dtb to rootfs… done.
Making system.img…
populating rootfs from /media/jorge/red/Linux_for_Tegra/rootfs …

It is normal for this stage to take a lot of time.
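
Regarding the NFS failure in the first log: when the flash stops there, the target is usually still reachable over the USB network, so you can inspect the device side with the command quoted in that message (sshpass is in the sshpass package; the address is the one the tool prints):

sshpass -p root ssh root@fc00:1:1:0::2   # log in to the target's initrd environment
# then, inside that shell, look for mount/NFS errors:
dmesg | tail -n 50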

Thanks,

I will try to disable more things. I followed another post here and disabled the firewall as well as checked that the NFS ports were open correctly. I will update when I have done those steps.

Jorge

This looks suspicious.

I am assuming you are out of HDD room or low on RAM?

Are you on a laptop or a desktop?

I am out of disk space; I have a rather small partition for the Ubuntu boot and the "host components" don't fit. Initially I had problems with the external drive, but after formatting it as ext4 those errors disappeared. I am using a laptop.

I hate to say this, but so much of this is so complicated that it's best to dedicate an Ubuntu box to it. We dumped VMware a while back and started buying used Dell Precision workstations. Less than $100 USD used; just get a model that supports Gen 3 or better NVMe. Even a 4-core box will do fine; the main thing is to get one with NVMe support.

Also, if the bootloader does not match the correct version, you are not going anyplace. Burn the SD with Pi-Imager for r35.x and try that first; if that does not work, burn r36.4 onto the SD (a command-line alternative is sketched below). The only way I was able to get into mine after I botched a flash script was to use sdkmanager. If the SD card does not boot, you will have to get a good sdkmanager setup going. Pretty sure they have some special code buried in it to jump-start the device.
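
If you would rather write the image from the command line than use Pi-Imager or Etcher, dd does the same job. Here sd-blob.img stands for your unzipped image file, and /dev/sdX is a placeholder for the SD card device, so verify it first:

lsblk   # find the SD card device node before writing anything
sudo dd if=sd-blob.img of=/dev/sdX bs=1M status=progress conv=fsync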

Boot from the SD card without the jumper.
With the flash tool or sdkmanager, connect the jumper.

Hello,

Despite checking the firewall again, there are still problems with NFS. Below I attach the full execution output of the flashing script.

script_long.txt (290.0 KB)

Jorge

I don't know if I am missing something, but the images to burn directly to SD don't have R36.4 (yet?). I have only been able to find 36.2 inside the "pre-made" SD card image for JetPack 6.1 rev 1.

Fairly certain this is the same one I used:

Just to make sure we are on the same page: when flashing that image to the SD card with Etcher, the version is the following:

Jetson System firmware version 36.2.0-gcid-34956989 date 2023-11-30T18:35:35+00:

So apparently there are no "pre-packed" JetPack versions with newer releases; that's why I am trying to flash a newer version through SDK Manager / the script, hoping that it will make the board boot after using jetson-io.py.

That is odd; I modified the kernel and device tree and used flash.sh to install.

fred@orin1:~$ uname -a
Linux orin1 5.15.148-tegra #1 SMP PREEMPT Sat Dec 28 15:35:54 EST 2024 aarch64 aarch64 aarch64 GNU/Linux
fred@orin1:~$ dpkg-query --show nvidia-l4t-core
nvidia-l4t-core	36.4.0-20240912212859
fred@orin1:~$ 

jetson-io.py does work for the most part; it still does not have any GPIO pin config options.

It does show my config but does not show the GPIO pins that are active. It will load the camera overlay. gpiodetect and gpioinfo still show the pins as unused, so I might not have done something correctly. I do have GPIO output and PWM output that does not show up there; it is, however, active. I have only done turn-on testing using the jetson-gpio lib in Python; I have not tried it using C++ libgpiod yet (a shell-level probe is sketched at the end of this post).

This is also my first NVIDIA experience, and I assumed it would allow GPIO output/input configs as in previous versions. I don't know if this is how it should be, or whether I am expecting something that was never present in that tool.

Also, I am no longer using the SD card; this is NVMe. I initially used the SD and it was 36.4 and JetPack 6.1; I used it to light up Super speed.

Another note: the Jetson Python lib uses header pin numbers, and those were all correct, so where it got the correct path is unknown at this time. Now that is cool; I am not a Python person, and that is so much simpler than using libgpiod and C++.
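
For anyone following along, the same turn-on test can be done from the shell with the libgpiod tools mentioned above. The chip and line numbers below are placeholders; read the real ones off gpioinfo first:

gpiodetect                          # list the GPIO chips
gpioinfo gpiochip0                  # list lines and what is using them
gpioset --mode=wait gpiochip0 105=1 # drive a line high until Enter is pressed (chip/line are placeholders)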

Some clarifications

  1. It would be better to always share both the UART log and the host side log when you are reporting a flash issue, because flashing is a process that involves two sides (host/device). The UART log provides the device-side log.

  2. Some explanations about flash.sh and initrd flash: the flash processes of the two are different.
    For beginners, I would suggest just using initrd flash for the Orin Nano, since flash.sh cannot flash any external drive; for example, it cannot flash anything to a USB drive. The difference is that flash.sh won't use NFS or USB device mode during the flash process, while initrd flash will (a quick health check for that path is sketched below).
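
A quick way to see whether that NFS/USB-device-mode path is healthy during an initrd flash is to check the usb0 interface on the host while the tool is waiting. The fc00:1:1::/64 address below is the target address the tool prints in your log:

ip addr show usb0        # host side of the USB network brought up by initrd flash
ping6 -c 3 fc00:1:1:0::2 # the target's address, as printed in the flash log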

And back to the original question: is it possible to use another Ubuntu host to flash here? Is the Ubuntu on your laptop a VM?

Hello,

You are right, let's keep the discussion focused. Nevertheless, the other comments are somewhat relevant. All the information regarding my setup is in my OP; nothing has changed, except that I tried to flash with the script again and let it run longer, since apparently it didn't have enough time to finish.

As I mention in my OP, I am using a native Ubuntu host computer, not a VM. I have disabled the firewall, but NFS keeps failing.

Just to make sure we are all on the same page, let me cite all the previous experiments:

SDK Manager with failed NFS

  • LOGS:
  • UART OUTPUT:

flash.sh "incomplete" attempt

  • LOGS (cmd output):
  • UART OUTPUT:

COMPLETE flash.sh ATTEMPT WITH AN ERROR AT THE END

  • LOGS (cmd output):
  • UART OUTPUT

serial_console_2.txt (82.0 KB)

Just in case, I cite below my OP with the setup I'm using:

Thanks

Just some points here.

  1. You are not using flash.sh. The tool you have been using so far is initrd_flash.sh. It is not the same as "flash.sh", because we have a separate tool called flash.sh.
    initrd_flash.sh uses "flash.sh" at the beginning and then uses another method (e.g. NFS) to flash the external drive later, while flash.sh has no NFS steps.
    You can flash your board using flash.sh and it won't hit any problem; however, it will only update the QSPI of your Jetson. With this method the bootloader part is updated, but your rootfs (NVMe/USB drive/SD) is not (a sketch follows after this list).

  2. According to the new "COMPLETE flash.sh ATTEMPT", I don't see any error from the UART side, which means the Jetson side has no issue. The cause of this issue might still be on the host PC side.
    How did you disable the firewall setting? sudo ufw disable?
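
As a sketch of that QSPI-only path (assuming the stock jetson-orin-nano-devkit config from the same BSP; this updates the bootloader in QSPI and leaves the SD/NVMe rootfs untouched):

cd Linux_for_Tegra/
# "internal" means the module's internal storage, i.e. the QSPI on an Orin Nano devkit
sudo ./flash.sh jetson-orin-nano-devkit internal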

You are right, I used flash.sh as a name incorrectly; sorry for that. Yes, I used sudo ufw disable and then checked that the status was inactive. I have also looked at the ports, and everything was configured as default.
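
Concretely, the checks were along these lines (reconstructed from memory):

sudo ufw status                     # reports "Status: inactive"
systemctl status nfs-kernel-server  # NFS server is running
sudo exportfs -v                    # directories currently exported
showmount -e localhost              # what an NFS client of this host would see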

Thanks for your answer,

Jorge.

1TB external USB hard drive formatted in ext4

Is this one /media/jorge/red/?

Yes, that’s correct

Hello,

First of all, happy new year if you celebrate! Any update on my logs? I assume you are very busy, but I have been fighting this for quite some time now. Thanks!

Jorge

Sorry, I have no idea for now what might be wrong with the NFS setting on your host side.

Please be aware that I don't see any error in your Jetson log, and the host side log has been the same since the beginning of this post.

Do you see anything wrong related to NFS after your flash when you check syslog on your host?

I repeated the flash with the initrd_flash.sh method and then piped syslog through grep looking for NFS:
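
The grep was roughly:

grep -i nfs /var/log/syslog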

Jan  2 15:57:11 salinas systemd[1]: Stopping NFS server and services...
Jan  2 15:57:11 salinas systemd[1]: nfs-server.service: Deactivated successfully.
Jan  2 15:57:11 salinas systemd[1]: Stopped NFS server and services.
Jan  2 15:57:11 salinas systemd[1]: Stopping NFSv4 ID-name mapping service...
Jan  2 15:57:11 salinas systemd[1]: Stopping NFS Mount Daemon...
Jan  2 15:57:11 salinas systemd[1]: Condition check resulted in RPC security service for NFS client and server being skipped.
Jan  2 15:57:11 salinas systemd[1]: Condition check resulted in RPC security service for NFS server being skipped.
Jan  2 15:57:11 salinas systemd[1]: nfs-idmapd.service: Main process exited, code=exited, status=1/FAILURE
Jan  2 15:57:11 salinas systemd[1]: nfs-idmapd.service: Failed with result 'exit-code'.
Jan  2 15:57:11 salinas systemd[1]: Stopped NFSv4 ID-name mapping service.
Jan  2 15:57:11 salinas systemd[1]: nfs-mountd.service: Deactivated successfully.
Jan  2 15:57:11 salinas systemd[1]: Stopped NFS Mount Daemon.
Jan  2 15:57:11 salinas systemd[1]: Starting NFSv4 ID-name mapping service...
Jan  2 15:57:11 salinas systemd[1]: Starting NFS Mount Daemon...
Jan  2 15:57:11 salinas systemd[1]: Started NFSv4 ID-name mapping service.
Jan  2 15:57:11 salinas systemd[1]: Started NFS Mount Daemon.
Jan  2 15:57:11 salinas systemd[1]: Starting NFS server and services...
Jan  2 15:57:11 salinas kernel: [63582.656193] NFSD: Using nfsdcld client tracking operations.
Jan  2 15:57:11 salinas kernel: [63582.656197] NFSD: no clients to reclaim, skipping NFSv4 grace period (net f0000000)
Jan  2 15:57:11 salinas systemd[1]: Finished NFS server and services.
Jan  2 16:25:35 salinas systemd[1]: Stopping NFS server and services...
Jan  2 16:25:35 salinas systemd[1]: nfs-server.service: Deactivated successfully.
Jan  2 16:25:35 salinas systemd[1]: Stopped NFS server and services.
Jan  2 16:25:35 salinas systemd[1]: Stopping NFSv4 ID-name mapping service...
Jan  2 16:25:35 salinas systemd[1]: Stopping NFS Mount Daemon...
Jan  2 16:25:35 salinas systemd[1]: Condition check resulted in RPC security service for NFS client and server being skipped.
Jan  2 16:25:35 salinas systemd[1]: Condition check resulted in RPC security service for NFS server being skipped.
Jan  2 16:25:35 salinas systemd[1]: nfs-idmapd.service: Main process exited, code=exited, status=1/FAILURE
Jan  2 16:25:35 salinas systemd[1]: nfs-idmapd.service: Failed with result 'exit-code'.
Jan  2 16:25:35 salinas systemd[1]: Stopped NFSv4 ID-name mapping service.
Jan  2 16:25:35 salinas systemd[1]: Starting NFSv4 ID-name mapping service...
Jan  2 16:25:35 salinas systemd[1]: Started NFSv4 ID-name mapping service.
Jan  2 16:25:35 salinas systemd[1]: nfs-mountd.service: Deactivated successfully.
Jan  2 16:25:35 salinas systemd[1]: Stopped NFS Mount Daemon.
Jan  2 16:25:35 salinas systemd[1]: Starting NFS Mount Daemon...
Jan  2 16:25:35 salinas systemd[1]: Started NFS Mount Daemon.
Jan  2 16:25:35 salinas systemd[1]: Starting NFS server and services...
Jan  2 16:25:35 salinas kernel: [65286.968812] NFSD: Using nfsdcld client tracking operations.
Jan  2 16:25:35 salinas kernel: [65286.968815] NFSD: no clients to reclaim, skipping NFSv4 grace period (net f0000000)
Jan  2 16:25:35 salinas systemd[1]: Finished NFS server and services.
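
The only failure in there is nfs-idmapd. If that is relevant, I can dig into that unit directly, e.g.:

systemctl status nfs-idmapd  # current state of the ID-mapping service
journalctl -u nfs-idmapd -b  # its messages since boot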

Thanks for your time,

Jorge