Install JetPack 6 DP on a newly installed NVMe SSD

Hi all,

I just installed an NVMe SSD in an AGX Orin devkit.
I would like to install JetPack 6 DP on it and boot from it.
I am using the CLI from the Docker image "sdkmanager-2.0.0.11405-Ubuntu_22.04_docker.tar.gz".
Now I am wondering which command line option I should use to install/flash onto the new NVMe device.
Could anybody help me with this one?

Thanks in advance for your help :)

Best regards

I have made some progress by creating a response file.
Unfortunately, I get the following error message after launching the install:

Error: Disk space check failure…

after using the following command on a Debian host:

docker run -it --privileged -v /dev/bus/usb:/dev/bus/usb/ -v /dev:/dev -v /media/$USER:/media/nvidia:slave --volume /media/Downloads:/media/Downloads --rm sdkmanager --cli --response-file /media/Downloads/sdkm_responsefile_sample_jetson.ini
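(Side note: since the failure is a disk space check, it may be worth verifying that the folders mounted into the container actually have room before re-running it; a quick check on the host, using the paths from the command above, could be:)

# SDK Manager typically needs tens of GB free for downloads and target images
df -h / /media/Downloads /media/$USER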

I have no idea what you are doing here.
At least put the full log here.

Is docker strictly required in your workflow?

Hi DaveYYY and all,

Thanks for your reply.

I have installed an SSD in the AGX Orin's M.2 slot.
What I am trying to achieve is to use the SSD only, i.e. install JetPack (Linux and the other components) on the SSD and then boot from it (without using the eMMC memory).
The SSD has been formatted as ext4, but that should not matter.

My host system is the latest Debian. According to the matrix shown in the following link:

I have tried SDK Manager on my host and it does not work: it cannot connect to the DevZone.

Then I followed the link:
https://docs.nvidia.com/sdk-manager/docker-containers/index.html
to match the matrix requirements using the Docker version of SDK Manager.

To speed up testing, I used the .ini file mentioned in
https://docs.nvidia.com/sdk-manager/sdkm-command-line-install/index.html

and ended up running the following command:
docker run -it --privileged -v /dev/bus/usb:/dev/bus/usb/ -v /dev:/dev -v /media/$USER:/media/nvidia:slave --volume /media/Downloads:/media/Downloads --rm sdkmanager --cli --response-file /media/Downloads/sdkm_responsefile_sample_jetson.ini

I get the following screen:

Then I get the error mentioned previously.

To answer your questions:

At least put the full log here.
The only log that I have is: Error: Disk space check failure…
Any ideas on how I could get more logs?

Is docker strictly required in your workflow?
According to the NVIDIA documentation: yes.
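(For reference, and assuming the container behaves like a regular host install, SDK Manager usually writes detailed logs under ~/.nvsdkm; something like the following, run inside the container session, should show them:)

# log location assumed from a normal host install of SDK Manager
ls -R ~/.nvsdkm
less ~/.nvsdkm/sdkm.log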

Regards

Docker is only required when you are using SDK Manager.
If you are flashing from the command line, then I think most Linux distros should do it:

sudo ./tools/kernel_flash/l4t_initrd_flash.sh --external-device nvme0n1p1 \
  -c tools/kernel_flash/flash_l4t_external.xml -p "-c bootloader/generic/cfg/flash_t234_qspi.xml" \
  --showlogs --network usb0 jetson-agx-orin-devkit internal

https://docs.nvidia.com/jetson/archives/r36.2/DeveloperGuide/
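(One prerequisite worth spelling out for that command: the initrd flash script expects the devkit to be in Force Recovery mode and visible over USB from the host; a quick way to check is:)

# when the Orin is in recovery mode it should enumerate as an NVIDIA Corp. device
lsusb | grep -i nvidia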


I would keep docker out of the equation. It is not needed.

Prepare a host machine with Ubuntu 22.04. As I understand it, the generation of the images requires UIDs and GIDs (and maybe other things) to be identical between host and target, so the safest choice is Ubuntu 22.04, maybe 20.04. No other distribution! No virtual machine, just bare metal. amd64/x86-64 architecture is required. Just follow these instructions to the dot. The machine must have full internet access. Install SDK Manager and perform the flash. It should work then.
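(A small sanity check for the host described above:)

# should report Ubuntu 22.04 (or 20.04) and x86_64
lsb_release -ds
uname -m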


Hi DaveYYY,

Big thanks for your help.

I have modified flash_l4t_external.xml to fit the NVMe disk, but this is still failing.
I tried to investigate, read a lot, and ended up using the following command:

./tools/kernel_flash/l4t_initrd_flash.sh --external-only --external-device nvme0n1p1 -c tools/kernel_flash/flash_l4t_external.xml -p "-c bootloader/generic/cfg/flash_t234_qspi.xml" --showlogs --network usb0 jetson-agx-orin-devkit external

But still getting stuck :/


Existing pscbl1file(/home/doe/Projects/nvidia/Linux_for_Tegra/bootloader/psc_bl1_t234_prod.bin) reused.
Existing mtsmcefile(/home/doe/Projects/nvidia/Linux_for_Tegra/bootloader/mce_flash_o10_cr_prod.bin) reused.
Existing tscfwfile(/home/doe/Projects/nvidia/Linux_for_Tegra/bootloader/tsec_t234.bin) reused.
Existing mb2applet(/home/doe/Projects/nvidia/Linux_for_Tegra/bootloader/applet_t234.bin) reused.
Existing bootloader(/home/doe/Projects/nvidia/Linux_for_Tegra/bootloader/mb2_t234.bin) reused.
bl is uefi
Making Boot image… done.
Not signing of boot.img
Making recovery ramdisk for recovery image…
Re-generating recovery ramdisk for recovery image…
cp: cannot stat ‘’: No such file or directory
failed command: cp -f .cpio.gz
Error: /home/doe/Projects/nvidia/Linux_for_Tegra/bootloader/signed/flash.idx is not found
Error: failed to relocate images to /home/doe/Projects/nvidia/Linux_for_Tegra/tools/kernel_flash/images
Cleaning up…

Any idea ?
Sorry to be slow :/

Hi again,

@fchkjwlsq: Thanks for the tips. I will consider this option once I am sure I am stuck :).

BR

Flash done !!! Big thanks !!

However, the system does not start (black screen for a very long time) after selecting the SSD for boot.

Dump the log, please.
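(For anyone following along: a common way to capture that log is over the devkit's debug UART; the device name and baud rate below are assumptions for an AGX Orin devkit, adjust as needed:)

# attach to the serial console and save everything to a capture file
sudo apt install minicom
sudo minicom -D /dev/ttyACM0 -b 115200 -C serial_boot.log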


Hi,

Here are the logs …

nvme_log.txt (112.6 KB)

Kind of an interesting journey :)

Thanks again for your help !!

PS: I forgot to mention that I had to replace python2 with dh-python in l4t_flash_prerequisites.sh, since the former no longer exists in the latest version …
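(Purely illustrative, the substitution described above as a one-liner, run from Linux_for_Tegra:)

# replace the obsolete python2 package name with dh-python in the prerequisites script
sudo sed -i 's/\bpython2\b/dh-python/' tools/l4t_flash_prerequisites.sh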

Why are you doing this? I don’t think it’s required.
Please also put the full flashing log here.

Do you have the file initrd under Linux_for_Tegra/rootfs/boot/?
Please delete everything and extract those packages again.
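(A minimal sketch of that clean re-extract; the archive file names below are the usual ones for L4T R36.2 and are an assumption here:)

tar xf Jetson_Linux_R36.2.0_aarch64.tbz2
sudo tar xpf Tegra_Linux_Sample-Root-Filesystem_R36.2.0_aarch64.tbz2 -C Linux_for_Tegra/rootfs/
cd Linux_for_Tegra
sudo ./tools/l4t_flash_prerequisites.sh
sudo ./apply_binaries.sh
ls rootfs/boot/initrd    # this file should now exist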

Do you have the file initrd under Linux_for_Tegra/rootfs/boot/?
Please delete everything and extract those packages again.

This has already been done :)
Now I end up with the black screen. I uploaded the log from the serial console in a text file in my last post…

I mean you may have done something wrong by modifying stuff in the BSP.
Please delete everything and flash with a clean BSP again.

Also, is it able to boot up with display from eMMC?

Yes, it still boots (with display) from the eMMC.

Then please flash the NVMe again with a clean BSP.

Hi,

Sorry for the lag. I was trying to understand the logs.
As suggested, I have tried to flash from a clean BSP.
I still have the same issue; it gets stuck during startup from the NVMe.
Curiously, and I do not know if this is a matter of buffering, I am not getting the same logs every time I restart :/
nvme_new_from_scratch.txt (68.8 KB)
nvme_new_from_scratch_2.txt (56.1 KB)
nvme_new_from_scratch_3.txt (65.9 KB)

I tried to compare it to the previous JetPack present on the eMMC, but I did not find any clue.
boot_emmc.txt (54.7 KB)

Can you please clean up the log?
It contains a lot of control characters and is very hard to read.

Anyway, how did you flash it?

Jetson UEFI firmware (version 2.1-32413640 built on 2023-01-24T23:12:27+00:00)

This means the UEFI is still 35.2.1, and you want it to boot a storage device with 36.2, which definitely will not work. I don’t believe the flash to NVMe was successful, and I have no idea what you are doing.
Please re-flash both eMMC and NVMe with 36.2.
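(To confirm which release is actually installed on either device after flashing, the running system reports its L4T version in /etc/nv_tegra_release:)

# prints something like "# R36 (release), REVISION: 2.0 ..."
cat /etc/nv_tegra_release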


OK, I understand now; I missed the fact that the firmware versions need to match. Big thanks.
For testing, I flashed the NVMe disk with the same version as the eMMC, and it is working.
I will attempt to move everything to 36.2 now.
Will keep you posted in any case.
