Dear experts,
We recently replaced a 500GB NVMe with a 1TB NVMe in our NVIDIA Jetson Xavier NX + JetPack 5.0.2 based system and observed a much longer boot time (reported as roughly double that of the 500GB NVMe case).
Below is the systemd analysis of our system with the 500GB NVMe (after disabling nvgetty.service and also removing the ttyTCU0,115200n8 boot option from /boot/extlinux/extlinux.conf):
$ systemd-analyze
Startup finished in 5.151s (kernel) + 30.997s (userspace) = 36.148s
graphical.target reached after 30.937s in userspace
$ systemd-analyze blame
20.008s dev-nvme0n1p1.device
19.893s docker.service
15.272s rc-local.service
15.266s snap.lxd.activate.service
10.014s gdm.service
6.712s nv-l4t-usb-device-mode.service
5.791s snapd.service
3.781s dev-zram3.device
3.253s dev-zram0.device
2.988s dev-zram1.device
2.711s dev-zram2.device
2.470s dev-zram4.device
2.411s dev-zram5.device
1.720s udisks2.service
1.686s systemd-random-seed.service
1.601s upower.service
1.291s user@1000.service
1.233s systemd-udev-trigger.service
1.203s containerd.service
1.137s nvphs.service
1.004s accounts-daemon.service
902ms dev-loop0.device
850ms nv.service
850ms dev-loop1.device
807ms alsa-restore.service
771ms dev-loop4.device
766ms networkd-dispatcher.service
756ms snap-lxd-28475.mount
743ms snap-multipass-12830.mount
725ms snap-snapcraft-12030.mount
711ms nvpmodel.service
697ms dev-loop6.device
694ms dev-loop2.device
663ms snap-core-17201.mount
662ms dev-loop5.device
648ms dev-loop3.device
630ms nvpower.service
627ms dev-loop8.device
620ms dev-loop7.device
610ms snap-lxd-26202.mount
607ms snap-multipass-11106.mount
599ms snap-core22-1383.mount
596ms snap-ros2\x2dmulti-x1.mount
564ms snap-core-16204.mount
563ms systemd-logind.service
561ms dev-loop12.device
535ms snap-core22-1035.mount
535ms dev-loop9.device
507ms dev-loop10.device
494ms snap-core20-2321.mount
479ms snap-core20-2019.mount
478ms snapd.seeded.service
472ms systemd-journald.service
466ms ModemManager.service
454ms dev-loop11.device
449ms kerneloops.service
437ms snap-snapcraft-10086.mount
431ms systemd-resolved.service
395ms apport.service
324ms NetworkManager.service
323ms ssh.service
295ms e2scrub_reap.service
282ms avahi-daemon.service
278ms packagekit.service
266ms keyboard-setup.service
257ms binfmt-support.service
254ms systemd-timesyncd.service
233ms dundee.service
232ms switcheroo-control.service
228ms systemd-udevd.service
189ms systemd-user-sessions.service
179ms systemd-modules-load.service
168ms systemd-tmpfiles-setup.service
162ms resolvconf-pull-resolved.service
158ms proc-sys-fs-binfmt_misc.mount
158ms dev-hugepages.mount
153ms colord.service
152ms dev-mqueue.mount
148ms ofono.service
147ms run-rpc_pipefs.mount
141ms sys-kernel-debug.mount
139ms polkit.service
138ms sys-kernel-tracing.mount
136ms nv_nvsciipc_init.service
136ms nvfb-early.service
130ms openvpn.service
125ms kmod-static-nodes.service
123ms modprobe@ramoops.service
123ms modprobe@chromeos_pstore.service
122ms nvfb-udev.service
122ms modprobe@pstore_blk.service
122ms modprobe@pstore_zone.service
121ms modprobe@efi_pstore.service
112ms rsyslog.service
109ms systemd-remount-fs.service
107ms wpa_supplicant.service
98ms pppd-dns.service
96ms user-runtime-dir@1000.service
56ms systemd-sysusers.service
56ms console-setup.service
53ms nfs-config.service
50ms rtkit-daemon.service
47ms systemd-update-utmp.service
46ms systemd-sysctl.service
44ms systemd-tmpfiles-setup-dev.service
38ms rpcbind.service
36ms sys-fs-fuse-connections.mount
35ms systemd-update-utmp-runlevel.service
31ms sys-kernel-config.mount
26ms systemd-journal-flush.service
25ms plymouth-read-write.service
25ms nvfb.service
17ms plymouth-quit-wait.service
16ms docker.socket
14ms setvtrgb.service
7ms snapd.socket
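One note on reading the numbers above: systemd starts units in parallel, so the blame times overlap and their sum is much larger than the 36s wall-clock boot time. A quick awk sketch (using sample entries from the list above, not the live systemd-analyze output) illustrates this:

```shell
# Blame entries overlap in parallel startup, so even a few of them
# already sum past the ~36s wall-clock time reported above.
printf '20.008s dev-nvme0n1p1.device\n19.893s docker.service\n902ms dev-loop0.device\n' |
awk '{
  t = $1
  if (t ~ /ms$/) { sub(/ms$/, "", t); t /= 1000 }   # convert ms entries to seconds
  else           { sub(/s$/,  "", t) }              # strip trailing "s"
  sum += t
} END { printf "sum: %.3fs\n", sum }'
# → sum: 40.803s
```

So the dev-nvme0n1p1.device and docker.service times above are most likely not additive; docker.service presumably spends most of its 19.893s waiting on the device unit.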
I will provide the analysis of the 1TB NVMe based system later when possible. In the meantime, looking at the slowest services in the analysis of the 500GB based system:
20.008s dev-nvme0n1p1.device
19.893s docker.service
15.272s rc-local.service
15.266s snap.lxd.activate.service
10.014s gdm.service
do you think the boot-time degradation depends heavily on the NVMe capacity?
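If it helps, I can also collect the following on both drives and share the output (standard systemd and kernel tooling, nothing custom):

```shell
# Show the dependency chain that actually gates boot,
# and how long the NVMe device unit itself takes to appear:
systemd-analyze critical-chain graphical.target
systemd-analyze critical-chain dev-nvme0n1p1.device

# Kernel-side NVMe probe/enumeration timing:
dmesg | grep -i nvme

# Check whether anything (e.g. a long fsck) ran against the partition:
journalctl -b | grep -i nvme0n1p1
```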
Thanks in advance for your opinions and any advice.
Best Regards,
Khang