AGX Orin can't boot up during initrd flash process

Yes, please also build UEFI on your x86-64 host PC.
The build has been verified on an x86-64 Ubuntu 18.04 host PC.


I have built the UEFI image and used it to flash and boot successfully.

Here are the logs:
UART log during boot:
uefi_debug_boot.txt (223.3 KB)
UART log during flash:
uefi_debug_flash_uart_log.txt (22.3 KB)
Flash log:
uefi_debug_flash_log.txt (83.7 KB)

But initrd flash still fails; it still shows a timeout.

Here are the logs:
uart log during initrd flash:
uefi_debug_initrd_uart_log.txt (70.9 KB)

initrd flash log:
uefi_debug_initrd_log.txt (199.6 KB)

From uefi_debug_initrd_uart_log.txt, it seems it also stopped in UEFI, but not with the previous “RAS Uncorrectable Error in IOB, base=0xe010000” issue.

Can initrd flash run successfully on the devkit with the same operation?
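For reference, a typical devkit initrd-flash invocation (taken from the L4T quick start; `nvme0n1p1` as the external root device and the board config name are assumptions to adjust for your setup):

```shell
# Flash QSPI + rootfs-on-NVMe via initrd flash (run from Linux_for_Tegra;
# board must be in recovery mode, connected over USB).
sudo ./tools/kernel_flash/l4t_initrd_flash.sh \
  --external-device nvme0n1p1 \
  -c tools/kernel_flash/flash_l4t_external.xml \
  -p "-c bootloader/t186ref/cfg/flash_t234_qspi.xml" \
  --showlogs --network usb0 \
  jetson-agx-orin-devkit internal
```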

Please also try the following modification, even if you have the EEPROM on the custom board:
Jetson AGX Orin Platform Adaptation and Bring-Up - Modifying the EEPROM
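The change from that guide section is a one-line BCT edit; a sketch, assuming the AGX Orin BSP layout (the exact file, e.g. `bootloader/tegra234-mb2-bct-misc-p3701-0000.dts`, varies by release, so verify against your BSP):

```dts
/* Tell MB2 to skip reading the carrier-board (CVB) EEPROM entirely,
 * so a missing or unprogrammed EEPROM no longer stalls boot: */
cvb_eeprom_read_size = <0x0>;  /* originally <0x100> */
```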


  1. Initrd flash can be run successfully on the devkit with the same operation.
    This problem only happens on our carrier board.

  2. I have already modified the EEPROM setting to 0x0.

Is the result the same after modifying this?


Hi @zax

What changed between then and now? Your initrd flash log has changed again, now with the error below.

ERROR: camera-ip/isp5/isp5.c:1977 [isp5_pm_init] “ERROR: Failed to turn isp power on”
BUG: core/init/init.c:85 [init_all] “*** FIRMWARE INIT FAILED AT LEVEL 95 ***”

What I cannot understand is that this error shouldn’t happen if you have already changed the UPHY configuration with the correct ODMDATA.

You could search this forum for this error log; you will see we have replied to this issue before.

Yes, you’re right.

But I’ve modified the ODMDATA, and it is definitely the same as in the adaptation guide.

Why is configuration #2 in the document different from the referenced topic?


The adaptation guide is correct. The forum post we replied in is just a workaround to avoid the error log, but the ODMDATA from the adaptation guide should have the same effect.
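For reference, ODMDATA is picked up from the board config that flash.sh sources; on AGX Orin that is typically `Linux_for_Tegra/p3701.conf.common`. A sketch of a config fragment only; the actual configuration #2 UPHY string must be copied from the adaptation guide for your release:

```shell
# p3701.conf.common (config fragment, not a command):
ODMDATA="<configuration #2 UPHY string from the adaptation guide>";
```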


I’m testing my modifications and found a problem on the production-version SOM.

I can use initrd flash on our carrier board with the release-candidate SOM (Board ID (3701), version (RC1), SKU (0004), revision (A.0)), but it fails (timeout) on the production SOM (Board ID (3701), version (500), SKU (0004), revision (G.0)).

The image is pure L4T with the following modifications:

  1. configuration #2 ODMDATA
  2. disabled MGBE in the device tree
  3. enabled C7 in the device tree and modified the pinmux
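Steps 2 and 3 above are `status` flips in the device tree; a sketch only, where the node labels (`mgbe0`, `pcie_c7`) are assumptions to be checked against the kernel device tree of your L4T release:

```dts
/* Step 2: disable the MGBE ethernet controller. */
&mgbe0 { status = "disabled"; };

/* Step 3: enable the PCIe C7 controller (pinmux changes are done
 * separately via the pinmux spreadsheet). */
&pcie_c7 { status = "okay"; };
```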

Here are the logs:
initrd_defult_addODMc7_500_initrd_log.txt (180.8 KB)
initrd_defult_addODMc7_500_initrd_uartlog.txt (33.5 KB)
initrd_defult_addODMc7_rc1_initrd_log.txt (216.5 KB)
initrd_defult_addODMc7_rc1_initrd_uartlog.txt (100.6 KB)


After several rounds of testing and confirmation, both the production SOM and the release-candidate SOM work fine on the devkit.

The reason for the initrd flash timeout could be that we’ve swapped the two PCIe clocks (C0 and C7) due to the hardware design of our carrier board.

Is there any suggestion for solving this?

But it is strange that the RC1 version is okay on our board without enabling the two PCIe controllers (C0 and C7) simultaneously.

What is the difference between production and RC1?


Is there any suggestion?

Hi zax,

Did you refer to “Enable PCIe in a Customer CVB Design” to modify the device tree for PCIe?

So, at this moment, does the issue only occur on the production module (version 500) with your custom board? And do both modules work on the devkit’s carrier board?

Hi KevinFFF,

We will have a new version of the carrier board for this issue.

The following are parts of our testing.

Could you please help check whether conf #2 is okay on the devkit (with both the RC1 and production modules), or does conf #2 simply not make sense on the devkit?

It does not make sense to use the devkit to test C0 + C7…

Hi WayneWWW,

Thanks for the reply.

Going back to the original topic:

How can I check that initrd flash is okay with conf #2?

What should I compare against?

You can boot it from another device, like eMMC or a USB drive, first to make sure your board can detect the NVMe drive correctly.

Initrd flash actually first boots into the initrd and lets the initrd flash the files to the NVMe drive. Thus, the initrd/kernel needs to be able to detect your NVMe drive first…
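Booted from eMMC, that detection can be confirmed with standard Linux commands before retrying initrd flash (device names such as `nvme0n1` are assumptions for a single-drive setup):

```shell
# Verify the PCIe link trained and the NVMe namespace enumerated.
ls /dev/nvme* 2>/dev/null || echo "no NVMe device nodes found"
lsblk                                        # nvme0n1 should appear as a block device
dmesg 2>/dev/null | grep -iE 'nvme|pcie' | tail -n 20   # link-up / probe errors, if any
```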

I’m not sure I understand the question.

But we can boot from eMMC, and the NVMe port works normally.

What is your exact problem here?

Is the issue still that you hit “ERROR: Failed to turn isp power on” when running initrd flash?

Yes, the issue occurs on the production module when I use conf #2, and in some cases on RC1 as well.

The following are the test results.