Enter OEM interface after reboot

You have to use the flash command without the “-r” parameter so that system.img is rebuilt and the content from l4t_create_default_user.sh is applied.

This command needs super user (sudo) permission to work.
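For example, a minimal sketch run from the Linux_for_Tegra directory (the user name, password, and host name below are placeholders; the board and device arguments match the AGX Orin devkit commands used later in this thread):

# Pre-create the default user so the OEM setup wizard is skipped on first boot
sudo ./tools/l4t_create_default_user.sh -u myuser -p mypasswd -a --accept-license -n myhost
# Flash WITHOUT “-r” so flash.sh rebuilds system.img from rootfs/ and picks up that user
sudo ./flash.sh jetson-agx-orin-devkit mmcblk0p1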

In your case it looks like it worked at the first boot, but the OEM interface might show up again after another reboot (or a power outage).
I’m trying to reproduce this issue on the devkit, but I have not seen it yet.

Did you modify anything in the rootfs?
Or could the method (sync) from @linuxdev help?

  1. I understand the difference the “-r” parameter makes in the flash command. I first build the image on the server to generate an .img file and then download it to the local machine. Then I flash the device with the “-r” parameter. The rootfs in that image has already had the user-creation script applied.

  2. This is not a problem that can be reproduced on demand. My suggestion is to start from the logs.

I use sudo.

Could you help to confirm the following items:

  1. Is the following message missing when the issue occurs?
    [ 30.969689] systemd[1]: Started Forward Password Requests to Wall Directory Watch.

  2. Does the content in /etc/gdm3/custom.conf change when the issue occurs?

  3. Use the source in JetPack_5.0.2_Linux_JETSON_AGX_ORIN_TARGETS/Linux_for_Tegra w/o other modification
    1). export your userName, userPasswd, hostName
    2). $ sudo ./tools/l4t_create_default_user.sh -u $userName -p $userPasswd -a --accept-license -n $hostName
    3). $ sudo ./flash.sh jetson-agx-orin-devkit mmcblk0p1

Thank you.

  1. Where can I read this message? Is it in the serial port startup log? I’ve uploaded it, but I can’t find it.

  2. No change

Can the problem be found in my log?

I did make many changes in the rootfs, because this is the image we need to customize.

This message can be found in the UART console log (like the dmesg log, 96.7 KB, you uploaded before).
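For example, you can search the uploaded log for it (a sketch; “boot.log” stands in for whichever UART console log you captured):

grep -n "Forward Password Requests to Wall Directory Watch" boot.log
grep -n "systemd-ask-password" boot.log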

Do you customize the rootfs on the devkit or custom board?
If you are able to flash a clean JetPack 5.0.2 with l4t_create_default_user.sh, I think that would help clarify whether the modifications in the rootfs cause this issue.

This problem occurs on both custom boards and the devkit.

From the failure log, we can see that this message is missing. What causes its absence?

The failure log has been uploaded.
“Under normal conditions: boot.log (107.3 KB)”

To be more specific, if this is caused by the customized rootfs, why does it sometimes happen and sometimes not? That is why it is strange where the problem comes from.

What’s the meaning of “under normal conditions”?

That message comes from systemd-ask-password-console.service, a systemd service that queries the user for system passwords.
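On a booted unit you could also inspect that service directly (a sketch, assuming the systemd journal covers the boot in question):

# Show whether the unit exists and its current state
systemctl status systemd-ask-password-console.service
# Show what it logged during the current boot
journalctl -b -u systemd-ask-password-console.service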

I still cannot reproduce this issue on devkit with clean JetPack5.0.2.
We’ve done reboot stress test without hitting this issue.
Could you provide the reproduction steps on the devkit (including your modifications to the rootfs)?

I don’t know if this helps, but thought I’d describe some of the details which might be useful when building a custom image. Everything might already be ok related to this, but hopefully this clarifies.

If you run the command “cat /proc/cmdline”, does it have the word “quiet” in it anywhere? If so, then this tells logging to really cut back on verbosity once Linux is reached. There might be other steps related to verbosity in boot software prior to Linux loading (and this might again differ between a pre-5.x JetPack and 5.x+). If you see “quiet” anywhere in “/boot/extlinux/extlinux.conf”, then remove this. It might go away in “cat /proc/cmdline”.
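A quick check might look like this (a sketch; each grep prints a match only if “quiet” is present):

grep -o quiet /proc/cmdline
grep -n quiet /boot/extlinux/extlinux.conf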

Notice also that some of that command line passed to the kernel at boot can come from the device tree (which earlier boot stages might use and/or edit). Specifically, if you want to see the device tree as the Linux kernel sees it at the moment it loads, you can examine “/proc/device-tree”. The specific part which becomes part of boot arguments, and which is also seen by boot stages, is “chosen->bootargs”. So check this out, and see if “quiet” is in this (remember that boot stages might need other changes to increase boot log verbosity, this is only part of it):
cat /proc/device-tree/chosen/bootargs
(there is no return at the end of the line so you’ll have to hit the enter key after that “cat”)

During a normal flash, content is written which is equivalent to what a PC would have for a BIOS (Jetsons don’t have a BIOS). Other content is flashed for the bootloader and boot environment (in JetPack 5.x+ it is UEFI). Only the rootfs/APP partition is for regular boot content. The latter is the part which normally gets customized, unless there is a custom carrier board (interesting differences include GPIO pins with a different function, alternate lane routings, and so on…layout differences).

If there is a custom carrier board, then it is possible the device tree has modifications which are used during boot prior to the Linux kernel loading. An example would be non-plug-n-play devices: since they cannot report the physical address at which the hardware is found, the driver must be told how to find the device via the device tree. If using a dev kit you don’t need to modify this in most cases.

In a failure case log I notice this:

[   33.685787] tegra-xudc 3550000.xudc: failed to get usbphy-0: -517
[   33.955458] tegra-xudc 3550000.xudc: failed to get usbphy-0: -517
[   33.975143] tegra-xudc 3550000.xudc: failed to get usbphy-0: -517
[   34.629278] tegra-xudc 3550000.xudc: failed to get usbphy-0: -517

The above is suspicious that the driver could not use the PHY because it had incorrect information passed to it. For example, an address. Another possibility related to device tree is indirect: if the power rail related to that PHY is down, then this might also be a cause of such an issue. In most other cases, if the PHY were actually found and failing at an attempt to run USB it would be a signal quality issue (e.g., trace impedance), but the message I see says that it never got that far. Thus the device tree is suspicious. If this is a device tree issue, then it might be a problem beyond just the USB (for example, if a power rail is not running, then that power rail might be shared among devices, and this is only one symptom).


If I were building a custom system, and if it is based on the same Ubuntu distribution, then one would still have to run the “sudo ./apply_binaries.sh” script to get various packages and drivers into the “Linux_for_Tegra/rootfs/” area before generating an image. This is normally done automatically by JetPack/SDK Manager and is only run manually if you’ve manually unpacked the rootfs. In the case of a custom rootfs based on Ubuntu, you might need to run this to get some of the first-boot content and drivers added. I don’t know if this would work in your case as I have no idea what is in your image. However, it would likely be needed for USB and many other hardware devices to function, and it would also need to be run before “tools/l4t_create_default_user.sh” for that script to matter.
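A sketch of that ordering, run from “Linux_for_Tegra/” on the host (the user name, password, and host name are placeholders):

# Install NVIDIA packages/drivers into rootfs/ (needed if the rootfs was unpacked or built manually)
sudo ./apply_binaries.sh
# Only after that, pre-create the default user so the OEM setup is skipped
sudo ./tools/l4t_create_default_user.sh -u myuser -p mypasswd -a --accept-license -n myhost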

Just for the sake of argument, let’s say that you’ve run a default flash:
sudo ./flash.sh jetson-agx-orin-devkit mmcblk0p1
(which generates “bootloader/system.img” based on some content added to “rootfs/boot/” via the “jetson-agx-orin-devkit.conf” file, but is otherwise verbatim that content)

You could then update the system on the Jetson, make modifications, and then clone this. The clone could be used to replace “bootloader/system.img”, and this flash with the “-r” you mentioned would flash all of the non-rootfs content, plus your image:
sudo ./flash.sh -r jetson-agx-orin-devkit mmcblk0p1
(which does not modify the image in any way)

You could in fact have taken the default image (the raw one, not the sparse one):
bootloader/system.img.raw
…then loopback mounted it, modified it, replaced “system.img” with the modified “system.img.raw”, and flashed with “-r”, and you would also get a verbatim image of your modifications of the default partition.
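A rough sketch of that loopback workflow, run from “Linux_for_Tegra/” (the mount point is just an example):

cd bootloader
sudo mkdir -p /mnt/rootfs_img
sudo mount -o loop system.img.raw /mnt/rootfs_img   # loopback mount the raw image
# ...make your modifications under /mnt/rootfs_img...
sudo umount /mnt/rootfs_img
sudo cp system.img.raw system.img                   # modified raw image becomes the image to flash
cd ..
sudo ./flash.sh -r jetson-agx-orin-devkit mmcblk0p1  # “-r” reuses system.img verbatim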

Depending on what your image has in extlinux.conf, and what device tree is used (there is both a partition device tree and a rootfs device tree…the one used depends on extlinux.conf and security fuses), you might get different kernel command lines and device tree function since it could be pulling from one of two places if you have not guaranteed which device tree is used. For that you need a fully verbose boot log.

However, if you command line flash, then this too will tell you (in combination with extlinux.conf) a lot about what is used. You could log flash like this:
sudo ./flash.sh -r jetson-agx-orin-devkit mmcblk0p1 2>&1 | tee log_flash.txt

You could export the final device tree to compare with what you think it is and look for differences:
dtc -I fs -O dts -o extracted.dts /proc/device-tree
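You could then decompile the default tree you expect and diff the two (a sketch; the .dtb path under “kernel/dtbs/” is an assumption, so substitute whichever default tree you believe was flashed):

dtc -I dtb -O dts -o default.dts kernel/dtbs/<your-default>.dtb
diff -u default.dts extracted.dts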

If your custom board’s layout is a duplicate of the dev kit, then you are all set so far as device tree goes. If not, then look closely at why the USB PHY was failing by examining the device tree and comparing with the default tree. You’ll know the default tree if you log command line flash and also post extlinux.conf (assuming security fuses are not burned).

1. “/proc/cmdline” and “/boot/extlinux/extlinux.conf” do not contain “quiet”.


2. cat /proc/device-tree/chosen/bootargs

3. This log is also printed in the case of a normal boot. When I say a normal boot (reboot), I mean a boot that skips the OEM interface and goes directly to the desktop.
In a failure case log I notice this:

[   33.685787] tegra-xudc 3550000.xudc: failed to get usbphy-0: -517
[   33.955458] tegra-xudc 3550000.xudc: failed to get usbphy-0: -517
[   33.975143] tegra-xudc 3550000.xudc: failed to get usbphy-0: -517
[   34.629278] tegra-xudc 3550000.xudc: failed to get usbphy-0: -517

4. This happens on the devkit as well, not just on custom boards.

1. “Under normal conditions” means that the OEM interface appears after the restart. I’ve said this many times. I stress again that my problem is that after flashing, I rebooted and entered the OEM interface. This is my problem, and I think it is not normal.

2. Yes, I have also encountered this situation: with the same device and the same image, some devices never show this problem under repeated reboots, while others do.

I think we should start with the log; the log messages are very strange, and we should find the root cause from there.

This issue can only be investigated if we can reproduce it. Checking your log may not help here.

If you can provide a method that reliably reproduces this issue, then we can help check.

I can’t reproduce this phenomenon reliably right now, sorry. Let me think about it again; could you also think about it some more? Thank you very much.

Hello, I encountered this problem again.

I attended an online seminar in October. It was said that restarting the Orin devkit has a certain probability of failure. What is that failure symptom? Is it the same as mine?

I don’t know about the issue you mentioned.
