How to Boot from USB Drive?

Wow, okay, my brain is still processing all this. I don't have the Jetson up right now, so I'm just thinking out loud:

  • Is it possible to convert the working kernel partition into an image I can use to replace /boot/Image on my eMMC (and on the NVMe, for that matter)?
  • How is the backup kernel partition working when, seconds earlier in the flash process, the same thing would have been written to the APP partition (though apparently corrupted)?
  • How is the backup kernel partition still working after I updated the kernel in the OS? Wouldn't there be a version mismatch, or did this backup kernel partition on the non-rootfs drive (eMMC) somehow get updated by the OTA updates?
  • If the backup kernel partition is modified by OTA updates, why doesn't the /boot folder on the eMMC get updated too (which would negate the need for me to copy the updates from the NVMe /boot before rebooting)? Although I guess if the kernel on the eMMC APP partition was never read, my copy operation was more or less ignored in this case. But still, how did the kernel partition on the eMMC get updated by the OTA updates? Did it not get updated? It says it's updated.

I'm quite confused about the implications of these UART findings.

Also, I'm not sure what this means… my kernel updated successfully without issue. That's why I'm further confused about how the OTA updates could have modified this backup kernel partition but left the eMMC APP partition alone (although maybe that's because the system didn't know about the APP-partition kernel location but did know the location of the working, booted backup kernel partition, and hence updated that).

IMO, the first question is whether reformatting the eMMC with SDK Manager is an option for you. I would like to deal with why even SDK Manager causes a file system error here. I don't think you should dig into the kernel backup partitions or anything else while the system is already in a broken state.

As for your question, I think OTA would update both of them. But OTA does not expect cboot to be unable to read your file system.

I will check my rel-32.5 device first to make sure the default Xavier really reads extlinux.conf from the eMMC rootfs…

My Xavier device log is still from the old release (rel-32.4.4). I have not yet checked the 32.5 log.

So didn't the flash I performed yesterday (to switch the rootfs to the NVMe) technically reformat the entire eMMC and rewrite all the partitions and everything on it? I'm pretty sure I saw that happening in the log on the host. And I had USB issues before that flash and continue to have them after it. Could it be that the images/packages sdkm downloaded are themselves partially corrupted (I'm still confused about the working backup kernel partition versus the corrupted /boot/Image if they came from the same place)? Does sdkm do a checksum on the original downloads?

So didn't the flash I performed yesterday (to switch the rootfs to the NVMe) technically reformat the entire eMMC and rewrite all the partitions and everything on it?

Yes, it should. Sorry about that. I switch between topics here, so I may forget what you've tried after reading too many topics from others…
So far, my guess is that when "cboot" tries to read the file system from any storage on your side, it has a problem, but when the "kernel" reads the same file system, it passes.

I make this guess because it sounds like all the file systems here for USB/NVMe were cloned from the eMMC, right? So if the eMMC was corrupted from the beginning, the corruption would carry over to the USB drive too.

Please give me some time to discuss this with the internal team. The engineers are in different time zones, so your patience is appreciated.

While waiting for their feedback, could you also try removing the driver package installed by sdkm and letting it download/flash again? I mean removing the BSP on the host side and letting sdkm do a clean download.
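
A minimal sketch of that clean-up, using the default paths seen elsewhere in this thread. DEMO_HOME is a stand-in so the sketch doesn't touch a real home directory; substitute "$HOME" (and your own user name) when doing this for real.

```shell
# DEMO_HOME stands in for the real home directory; the two paths below are the
# sdkm download cache and the extracted BSP locations seen in this thread.
DEMO_HOME="$(mktemp -d)"
mkdir -p "$DEMO_HOME/Downloads/nvidia/sdkm_downloads"
mkdir -p "$DEMO_HOME/nvidia/nvidia_sdk/JetPack_4.5.1_Linux_JETSON_AGX_XAVIER"

# Removing both directories forces sdkm to do a clean download on its next run.
rm -rf "$DEMO_HOME/Downloads/nvidia/sdkm_downloads"
rm -rf "$DEMO_HOME/nvidia/nvidia_sdk/JetPack_4.5.1_Linux_JETSON_AGX_XAVIER"
```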

Or you can directly download the BSP and rootfs from our DLC and set them up manually. (No sdkm required, and nothing to remove; they are just separate files.)

The "quick start guide" on that page also covers the steps.

Okay, back at it. My connection is a bit spotty as I'm currently traveling, but I'll re-download everything as soon as I'm able. In the meantime I generated md5 checksums for all the files sdkm already downloaded. If anyone has time to run the same command and post their results, I would really appreciate it.

For those wondering what this is all about: comparing these md5 checksums against checksums from known-good downloads will help me identify corrupted downloads on my end (barring the very rare hash collision, and assuming other people don't also have corrupted downloads).
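
Once someone posts their list, the comparison itself can be sketched like this (a hypothetical sketch: `mine_md5.txt` and `theirs_md5.txt` are placeholder names, and the sample entries below stand in for real list contents; the path prefixes differ per machine, so files are matched by basename):

```shell
# Sketch: compare two md5sum lists whose path prefixes differ, matching files
# by basename. The two sample lists stand in for my list and another user's.
cd "$(mktemp -d)"
cat > mine_md5.txt <<'EOF'
13fd4a30819dc1276ca4cbb1425ddd72  /home/hogank/Downloads/nvidia/sdkm_downloads/libnvonnxparsers7_7.1.3-1+cuda10.2_arm64.deb
a75c6180271afbffb786229a93f7cefe  /home/hogank/Downloads/nvidia/sdkm_downloads/libnvinfer-doc_7.1.3-1+cuda10.2_all.deb
EOF
cat > theirs_md5.txt <<'EOF'
13fd4a30819dc1276ca4cbb1425ddd72  /home/other/sdkm_downloads/libnvonnxparsers7_7.1.3-1+cuda10.2_arm64.deb
ffffffffffffffffffffffffffffffff  /home/other/sdkm_downloads/libnvinfer-doc_7.1.3-1+cuda10.2_all.deb
EOF
# Normalize each list to "basename hash", sorted by filename.
norm() { awk '{n=split($NF,p,"/"); print p[n], $1}' "$1" | sort; }
norm mine_md5.txt   > mine.norm
norm theirs_md5.txt > theirs.norm
# Empty diff output means every file's checksum matches; any output pinpoints
# a download that differs between the two machines.
diff mine.norm theirs.norm || true
```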

Command (change hogank and/or the download location if yours differ):

find /home/hogank/Downloads/nvidia/sdkm_downloads/ -type f -exec md5sum "{}" + > /home/hogank/Desktop/sdkm_downloads_md5.txt

Results: located in /home/hogank/Desktop/sdkm_downloads_md5.txt (removed all the path prefixes below for brevity)

13fd4a30819dc1276ca4cbb1425ddd72 libnvonnxparsers7_7.1.3-1+cuda10.2_arm64.deb
a75c6180271afbffb786229a93f7cefe libnvinfer-doc_7.1.3-1+cuda10.2_all.deb
a151651f2aa625622b96909d9ced3029 nsight-systems-cli-2020.5.3_2020.5.3.17-1_arm64.deb
608a3b21efc81574abd41bfee8947ba9 libnvinfer-dev_7.1.3-1+cuda10.2_arm64.deb
8ad8bf9d497a753fd38c34bed375b833 libnvparsers7_7.1.3-1+cuda10.2_arm64.deb
ad5a4d2513ec0ed2f743afb948adb9b0 OpenCV-4.1.1-2-gd5a58aa75-aarch64-dev.deb
4c0b65a88ce83049d8811bae4fafddbe libvisionworks-sfm-repo_0.90.4.501_arm64.deb
cef544ada8bb548e9960987e930cb6fe nvidia-docker2_2.2.0-1_all.deb
5f72ea29de7fc6447a8d318e422db46a nvidia-container-runtime_3.1.0-1_arm64.deb
afa252ef5d96ab58e6c3672c6b5f90ed nvidia-container-csv-visionworks_1.6.0.501_arm64.deb
d210e7990c507f4d3ef1f2762e936e00 vpi-cross-aarch64-l4t-1.0.15-x86_64-linux.deb
68efa7782a6b0f927424b00ba848704b libcudnn8-doc_8.0.0.180-1+cuda10.2_arm64.deb
beea9cf0580265cf85b41b61e202d023 vpi-dev-1.0.15-cuda10-x86_64-linux.deb
af1138c048f628766f7d7910fe3252bd Jetson_Linux_R32.5.1_aarch64.tbz2
c593f85f62ffd61120d6fbc34006f219 libnvparsers-dev_7.1.3-1+cuda10.2_arm64.deb
478fe3b81baf2a36a5c62b142e47aa52 libvisionworks-sfm-repo_0.90.4.501_amd64.deb
e19ba074d111c0af9f0bf7f4fcdfbfa7 OpenCV-4.1.1-2-gd5a58aa75-aarch64-libs.deb
adfa4a7c800eebe32fd19d42db2ee631 libnvinfer-samples_7.1.3-1+cuda10.2_all.deb
c512ce9407a86ec484ad0f0b81c79e42 python3-libnvinfer_7.1.3-1+cuda10.2_arm64.deb
94dd40f69c070c60c9afe172823ae3d7 vpi-dev-1.0.15-aarch64-l4t.deb
437971a5798e78a22d4a4c10ba3b1502 cuda-repo-cross-aarch64-10-2-local-10.2.89_1.0-1_all.deb
046ab7b58b5d8ba61090a1566d2b0068 NsightSystems-linux-public-2020.5.3.17-0256620.deb
30825790ca61fc8ae67e238c71ba31a6 libvisionworks-tracking-repo_0.88.2.501_amd64.deb
774350dc066a328a7f47d222385ef3a9 libcudnn8_8.0.0.180-1+cuda10.2_arm64.deb
9b76a8f2c4d53fa7cad48bc7cb4f2257 nvidia-container-csv-cudnn_8.0.0.180-1+cuda10.2_arm64.deb
d41d8cd98f00b204e9800998ecf8427e cuda-repo-ubuntu1804-10-2-local-10.2.89-440.40_1.0-1_amd64.deb.mtd
6804d700f424a55073cd8d8b39d5ca19 vpi-lib-1.0.15-aarch64-l4t.deb
073c73662cd7ae907284b1d9ba67a838 vpi-demos-1.0.15-aarch64-l4t.deb
e7e779c024f2d2d956378a67ebfea8b4 sdkml3_jetpack_l4t_451.json
0716042b14e58a76d0426fa6fafd556a libnvinfer-plugin-dev_7.1.3-1+cuda10.2_arm64.deb
7d61fb25722287bcece496008b3a9e8b deepstream-5.1_5.1.0-1_arm64.deb
5969fc1376ac31cbadfd42276e271ffd OpenCV-4.1.1-2-gd5a58aa75-aarch64-samples.deb
bc75cded1a32d5871f0e18a7dfa4a0c0 tensorrt_7.1.3.0-1+cuda10.2_arm64.deb
bebb557cfa3d62d99a6f3615230f79dd python-libnvinfer-dev_7.1.3-1+cuda10.2_arm64.deb
291e2efb3286eafd24540dbcb17ebff6 libcudnn8-dev_8.0.0.180-1+cuda10.2_arm64.deb
7bd05af24165d06e0463bc09ae75c28a libnvinfer-plugin7_7.1.3-1+cuda10.2_arm64.deb
4d956d853d09c2f06e4bcbf7a7505318 uff-converter-tf_7.1.3-1+cuda10.2_arm64.deb
6c25179690472476d6b8ac94578a6289 libvisionworks-tracking-repo_0.88.2.501_arm64.deb
e6eee2ab7d1a73b0567516bca1cafd72 libnvinfer-bin_7.1.3-1+cuda10.2_arm64.deb
bd5924c9624e029348313db155e0e395 sdkml3_jetpack_l4t_451_deepstream.json
259a2c8dc34ba4efe39217379bd02e0a nvidia-l4t-jetson-multimedia-api_32.5.1-20210219084708_arm64.deb
286beb1a09da17efcef278930384f86b cuda-repo-ubuntu1804-10-2-local-10.2.89-440.40_1.0-1_amd64.deb
056ea0183c8a779ed097f37d2b46fb4f libvisionworks-repo_1.6.0.501_amd64.deb
809212ce67fa634ceb10731329370862 python3-libnvinfer-dev_7.1.3-1+cuda10.2_arm64.deb
9cbfb8d8c7c4d678532a4f811de13fe6 libnvonnxparsers-dev_7.1.3-1+cuda10.2_arm64.deb
8a0fdf5ea469231f2f623626054eb4e3 python-libnvinfer_7.1.3-1+cuda10.2_arm64.deb
61c91707eb80f334614e5da624738ea0 vpi-samples-1.0.15-cuda10-x86_64-linux.deb
9db88f13e55751329d16d6d6ec0ae4ac OpenCV-4.1.1-2-gd5a58aa75-aarch64-licenses.deb
f7a3983260a0f22d0b3fd84829952794 libnvidia-container-tools_0.9.0_beta.1_arm64.deb
d6f6b1b3ad06965caca5ca792c43aa43 cuda-repo-l4t-10-2-local-10.2.89_1.0-1_arm64.deb
c20e09428744046b3fad6534a170e3df vpi-samples-1.0.15-aarch64-l4t.deb
33805bde17712e09f447c5e059dcb082 Tegra_Linux_Sample-Root-Filesystem_R32.5.1_aarch64.tbz2
dadb934c844aaaab1f675420c3467b00 graphsurgeon-tf_7.1.3-1+cuda10.2_arm64.deb
c343f119238659c724732da212163e83 libnvidia-container0_0.9.0_beta.1_arm64.deb
ceb03d4e65e8cb8dc591238c1307f074 NVIDIA_Nsight_Graphics_L4T_Public_2020.5.20329.deb
be45a916e63dbe5be853db2c5c57cf62 nvidia-container-csv-tensorrt_7.1.3.0-1+cuda10.2_arm64.deb
eb3cb607f89f319a1125ca09ca2b8141 nvidia-container-csv-cuda_10.2.89-1_arm64.deb
48cc6c5935636fafb6b2f1078f721cbc nvidia-container-toolkit_1.0.1-1_arm64.deb
299622cc9331a15a8ac65beb7ec4579e libnvinfer7_7.1.3-1+cuda10.2_arm64.deb
a18df12c5576b0a53f8242ee3a1e5900 vpi-lib-1.0.15-cuda10-x86_64-linux.deb
d622b7d729b62ecee44c9e7d289918cb OpenCV-4.1.1-2-gd5a58aa75-aarch64-python.deb
6232660acb8305fc6e52125067fc0008 libvisionworks-repo_1.6.0.501_arm64.deb
f409dc07b205d291cc86678c826e5156 vpi-demos-1.0.15-cuda10-x86_64-linux-ubuntu1804.deb

Found the sha1 checksums for the latest release (undocumented, but it mirrors the checksum pages for previous releases listed in the Jetson Download Center) at this url. If anyone prefers sha1 for any reason, I've updated the command and results below. The sha1 checksums listed for the release cover only a subset of the files sdkm downloaded, so I can only compare a few. I would still appreciate it if someone would run this on their end and post their results.

Command (change hogank and/or the download location if yours differ):

find /home/hogank/Downloads/nvidia/sdkm_downloads/ -type f -exec sha1sum "{}" + > /home/hogank/Desktop/sdkm_downloads_sha1.txt

Results: located in /home/hogank/Desktop/sdkm_downloads_sha1.txt (again removed all the path prefixes below for brevity)

ad2b3a674302ffeccb1377f5914bbef8c13a76c2 libnvonnxparsers7_7.1.3-1+cuda10.2_arm64.deb
0f4b41b8b583c1060f784c759de39b2649777dc3 libnvinfer-doc_7.1.3-1+cuda10.2_all.deb
fbe2fbd84b97ce5d28b0c9a047237a26d2a9c1d3 nsight-systems-cli-2020.5.3_2020.5.3.17-1_arm64.deb
104386e74d332ed2768087d5f7f086b3a80b8e01 libnvinfer-dev_7.1.3-1+cuda10.2_arm64.deb
12f33966cb652c0333cd123d626833b8691450e1 libnvparsers7_7.1.3-1+cuda10.2_arm64.deb
a76514250c6e700f259f571cdf88bc8a6ab91fbb OpenCV-4.1.1-2-gd5a58aa75-aarch64-dev.deb
7638c1ea2130c315b73870816b42f99e50ebe064 libvisionworks-sfm-repo_0.90.4.501_arm64.deb
83eea5bd7fabda59305c8f0951dee57151197023 nvidia-docker2_2.2.0-1_all.deb
73826f2c150dfacc12f6393c783f71e7750d8a55 nvidia-container-runtime_3.1.0-1_arm64.deb
da958d4c490e363e8bf62b2d315e53b3997507b3 nvidia-container-csv-visionworks_1.6.0.501_arm64.deb
7c780667d924e02acd1a85cd199818b4516e987b vpi-cross-aarch64-l4t-1.0.15-x86_64-linux.deb
a6efd123c91cc42f77343d61acbcbf423c2d4926 libcudnn8-doc_8.0.0.180-1+cuda10.2_arm64.deb
c101f9001af7b18a33a4a0c18082bb1a53275ef6 vpi-dev-1.0.15-cuda10-x86_64-linux.deb
9d95b2a1e647d71b32257e90990f2a582bd9e0ec Jetson_Linux_R32.5.1_aarch64.tbz2
f4e87e71d04e639e5b21ee20dfe2ce9ea8d4e92a libnvparsers-dev_7.1.3-1+cuda10.2_arm64.deb
e91175eb0e3d122060a7a3e335b132dda5288360 libvisionworks-sfm-repo_0.90.4.501_amd64.deb
4038573782e90a1c5e0d766257d3d69b915a60bb OpenCV-4.1.1-2-gd5a58aa75-aarch64-libs.deb
01d196d195839e8873ef4154e5cd0169b572ef5f libnvinfer-samples_7.1.3-1+cuda10.2_all.deb
7b6877c8b48670d3a7296508b2b87b56d775cc5a python3-libnvinfer_7.1.3-1+cuda10.2_arm64.deb
9b31ac4299580bddf67fa439ce57e1639a3b58c6 vpi-dev-1.0.15-aarch64-l4t.deb
a50b3ddd5c907ed685df0fb81ce84788ccfc6436 cuda-repo-cross-aarch64-10-2-local-10.2.89_1.0-1_all.deb
fde6965b6f079e27e9ac8f0f0eda6bbc9e76ce4d NsightSystems-linux-public-2020.5.3.17-0256620.deb
14c659d05d5cbf814161a74285878774e3ab2645 libvisionworks-tracking-repo_0.88.2.501_amd64.deb
d984785d62fcebb54bb131e8eecd35092d3fc6c4 libcudnn8_8.0.0.180-1+cuda10.2_arm64.deb
1a91a2db048da9e9ee8d87e24a12aba815c75391 nvidia-container-csv-cudnn_8.0.0.180-1+cuda10.2_arm64.deb
da39a3ee5e6b4b0d3255bfef95601890afd80709 cuda-repo-ubuntu1804-10-2-local-10.2.89-440.40_1.0-1_amd64.deb.mtd
e9e1987f32b8bed269c4b3c805f7683d150f0ef4 vpi-lib-1.0.15-aarch64-l4t.deb
e43cf8dfc8739bf38b9fdaecb8086f9a21acbd2a vpi-demos-1.0.15-aarch64-l4t.deb
8264f8c945195e9ab211ebf6fa71e326bfef80f6 sdkml3_jetpack_l4t_451.json
570a91ec21c073227951cd0083542dd083a5771a libnvinfer-plugin-dev_7.1.3-1+cuda10.2_arm64.deb
91dca9f0935c3c2eb142104cd09002c5103739e9 deepstream-5.1_5.1.0-1_arm64.deb
178aa1d561bf220609b653515e727300837ae8f2 OpenCV-4.1.1-2-gd5a58aa75-aarch64-samples.deb
5409e9cd36b7106429def29e1de703e26bff06ff tensorrt_7.1.3.0-1+cuda10.2_arm64.deb
edf59625503fb7e6a6e6adce9dea18e83df6c494 python-libnvinfer-dev_7.1.3-1+cuda10.2_arm64.deb
0d02f2bee31e4f1bdb8e7c8e8d5ed6101be0eac9 libcudnn8-dev_8.0.0.180-1+cuda10.2_arm64.deb
602a8943ef08313a3fc91e6d7a6321f7a4cc07fd libnvinfer-plugin7_7.1.3-1+cuda10.2_arm64.deb
86063ba9f7701da7ae338ded61e46b66e4c7c7ed uff-converter-tf_7.1.3-1+cuda10.2_arm64.deb
c662cb6709f2c3ae922e5369e266dfbe25ebba4f libvisionworks-tracking-repo_0.88.2.501_arm64.deb
2856d32a53d28fb013382aaf9afcca3836ca3c4b libnvinfer-bin_7.1.3-1+cuda10.2_arm64.deb
a5271fca5fabb062f9c14a33b50de3e5bc88ea15 sdkml3_jetpack_l4t_451_deepstream.json
c59e44678b4407841d58d47f23f9d6a66eca9769 nvidia-l4t-jetson-multimedia-api_32.5.1-20210219084708_arm64.deb
bd733f27add7bdae96f841460fd55c3655da7750 cuda-repo-ubuntu1804-10-2-local-10.2.89-440.40_1.0-1_amd64.deb
a2b0b0a614436a1a88f28ed5c5e5f11909049f17 libvisionworks-repo_1.6.0.501_amd64.deb
9523b1ea08496273e4883fe611d410f6f1e09e5b python3-libnvinfer-dev_7.1.3-1+cuda10.2_arm64.deb
de75a0d2585baab47f4750ecf25e731bc6e24eed libnvonnxparsers-dev_7.1.3-1+cuda10.2_arm64.deb
ab1de5da2f5db42f805d142c0ac9e4ea9c9977de python-libnvinfer_7.1.3-1+cuda10.2_arm64.deb
a562d57242fe2da5a611fbd6cd5e4728428efd10 vpi-samples-1.0.15-cuda10-x86_64-linux.deb
6b48447f238bab123923d73a0e5bcf88dcba6574 OpenCV-4.1.1-2-gd5a58aa75-aarch64-licenses.deb
a6dea092044a0d6bf129ef8e689b4d42a454178e libnvidia-container-tools_0.9.0_beta.1_arm64.deb
0f5a9fca813d93479a1b453e980f74880461c343 cuda-repo-l4t-10-2-local-10.2.89_1.0-1_arm64.deb
3d8901e443f20080ce19f458bf1d76a650d271bb vpi-samples-1.0.15-aarch64-l4t.deb
d9e0fa1b60e5a91744e7b8fd2088a42b52f4f036 Tegra_Linux_Sample-Root-Filesystem_R32.5.1_aarch64.tbz2
90cad5bd25bfad11258292331a8a37403e41d325 graphsurgeon-tf_7.1.3-1+cuda10.2_arm64.deb
7e48eff22dd30c1536123574dcfaff92a92fb7ed libnvidia-container0_0.9.0_beta.1_arm64.deb
4214aef525f3541fcbb6af671630058e1516dac9 NVIDIA_Nsight_Graphics_L4T_Public_2020.5.20329.deb
e82ef547b0e243c72f5d2cc7552f8f789ec821bc nvidia-container-csv-tensorrt_7.1.3.0-1+cuda10.2_arm64.deb
ce2223091316605eca773b40c48d9164a5a52ca2 nvidia-container-csv-cuda_10.2.89-1_arm64.deb
81c257b061567fe55262cf1ed70927e01740b03d nvidia-container-toolkit_1.0.1-1_arm64.deb
9922d867da9388932c15df438fb5286790f84108 libnvinfer7_7.1.3-1+cuda10.2_arm64.deb
90e75ff326e48548fe88cbd9c335e700fe706510 vpi-lib-1.0.15-cuda10-x86_64-linux.deb
b2a225aa95015ab61a3f134886f85f165c2f5d55 OpenCV-4.1.1-2-gd5a58aa75-aarch64-python.deb
bf4849ef7b0509adb417a6f26d4762307a130967 libvisionworks-repo_1.6.0.501_arm64.deb
8eab8a8e79c6d724df876ebabcaa7dc8c52fe508 vpi-demos-1.0.15-cuda10-x86_64-linux-ubuntu1804.deb

And a quick comparison shows that both of the following files have sha1 checksums (computed on my host) identical to those in the list of sha1 checksums published with the release.

d9e0fa1b60e5a91744e7b8fd2088a42b52f4f036 Tegra_Linux_Sample-Root-Filesystem_R32.5.1_aarch64.tbz2
9d95b2a1e647d71b32257e90990f2a582bd9e0ec Jetson_Linux_R32.5.1_aarch64.tbz2
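
For checking against the published list, `sha1sum -c` does the comparison directly. A small self-contained sketch (the sample file and check list below are stand-ins for the real tarballs and the release's checksum file):

```shell
# sample.tbz2 stands in for a real download; release_sha1.txt mimics the
# "<sha1>  <filename>" format of the published checksum list.
cd "$(mktemp -d)"
printf 'sample payload' > sample.tbz2
sha1sum sample.tbz2 > release_sha1.txt
# -c re-hashes each listed file and prints "<file>: OK" or "<file>: FAILED".
sha1sum -c release_sha1.txt
```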

Okay, diving into a comparison of the official L4T Driver Package (BSP), Tegra186_Linux_R32.5.1_aarch64.tbz2, against my pre-existing Linux_for_Tegra directory (for me at /home/hogank/nvidia/nvidia_sdk/JetPack_4.5.1_Linux_JETSON_AGX_XAVIER/Linux_for_Tegra), excluding rootfs for now since it isn't really part of the BSP: all 436 files from Tegra186_Linux_R32.5.1_aarch64.tbz2 were present in my Linux_for_Tegra directory. Of the files common to both, 4 came back with different checksums (possibly modified during the sdkm flash process?), and 46 additional files (beyond the 436 common ones) existed only in my Linux_for_Tegra directory (presumably generated by sdkm during flashing). I've listed the breakdown below and will move on to the rootfs next.

4 files common to Tegra186_Linux_R32.5.1_aarch64.tbz2 and Linux_for_Tegra with different checksums - presumably modified by the sdkm during flashing

  • /bootloader/adsp-fw.bin
  • /bootloader/eks.img
  • /bootloader/nvtboot_applet_t194.bin
  • /bootloader/spe_t194.bin

46 files found only under my pre-existing Linux_for_Tegra directory - presumably generated by the sdkm during flashing

  • /bootloader/__pycache__/tegraflash_internal.cpython-36.pyc
  • /bootloader/badpage.bin
  • /bootloader/boot.img
  • /bootloader/
  • /bootloader/cvm.bin
  • /bootloader/emmc_bootblob_ver.txt
  • /bootloader/flash_parameters.txt
  • /bootloader/flash_win.bat
  • /bootloader/flash.xml
  • /bootloader/
  • /bootloader/flashcmd.txt
  • /bootloader/initrd
  • /bootloader/kernel_bootctrl.bin
  • /bootloader/kernel_tegra194-p2888-0001-p2822-0000.dtb
  • /bootloader/
  • /bootloader/mb1_t194_prod_sigheader_encrypt.bin
  • /bootloader/mb1_t194_prod_sigheader.bin
  • /bootloader/mb1_t194_prod_sigheader.hash
  • /bootloader/recovery.img
  • /bootloader/recovery.ramdisk
  • /bootloader/system.img
  • /bootloader/tegra194-a02-bpmp-p2888-a04.dtb
  • /bootloader/tegra194-br-bct-sdmmc.cfg
  • /bootloader/tegra194-mb1-bct-gpioint-p2888-0000-p2822-0000.cfg
  • /bootloader/tegra194-mb1-bct-memcfg-p2888.cfg
  • /bootloader/tegra194-mb1-bct-misc-flash.cfg
  • /bootloader/tegra194-mb1-bct-misc-l4t.cfg
  • /bootloader/tegra194-mb1-bct-pmic-p2888-0001-a04-E-0-p2822-0000.cfg
  • /bootloader/tegra194-mb1-bct-ratchet-p2888-0000-p2822-0000.cfg
  • /bootloader/tegra194-mb1-bct-reset-p2888-0000-p2822-0000.cfg
  • /bootloader/tegra194-mb1-bct-scr-cbb-mini.cfg
  • /bootloader/tegra194-mb1-soft-fuses-l4t.cfg
  • /bootloader/tegra194-mb1-uphy-lane-p2888-0000-p2822-0000.cfg
  • /bootloader/tegra194-memcfg-sw-override.cfg
  • /bootloader/tegra194-p2888-0001-p2822-0000.dtb
  • /bootloader/tegra194-p2888-0001-p2822-0000.dtb.rec
  • /bootloader/tegra19x-mb1-bct-device-sdmmc.cfg
  • /bootloader/tegra19x-mb1-padvoltage-p2888-0000-a00-p2822-0000-a00.cfg
  • /bootloader/tegra19x-mb1-pinmux-p2888-0000-a04-p2822-0000-b01.cfg
  • /bootloader/tegra19x-mb1-prod-p2888-0000-p2822-0000.cfg
  • /bootloader/temp_user_dir/boot_sigheader.img.encrypt
  • /bootloader/temp_user_dir/boot.img
  • /bootloader/temp_user_dir/kernel_tegra194-p2888-0001-p2822-0000_sigheader.dtb.encrypt
  • /bootloader/temp_user_dir/kernel_tegra194-p2888-0001-p2822-0000.dtb
  • /bootloader/temp_user_dir/kernel_tegra194-p2888-0001-p2822-0000.dtb.sig
  • /kernel/dtb/tegra194-p2888-0001-p2822-0000.dtb.rec
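
The tree comparison above can be sketched like this (a hypothetical sketch: `fresh/` and `existing/` are placeholder names standing in for the freshly extracted BSP and the pre-existing Linux_for_Tegra tree, with tiny sample files in place of the real ones):

```shell
# Build two small trees: one unchanged file, one modified copy, and one file
# that exists only in the pre-existing tree.
cd "$(mktemp -d)"
mkdir -p fresh/bootloader existing/bootloader
printf 'original'  > fresh/bootloader/eks.img
printf 'modified'  > existing/bootloader/eks.img   # differs -> reported FAILED
printf 'generated' > existing/bootloader/boot.img  # only in existing tree

# Hash the fresh tree, then verify those hashes against the existing tree;
# common files with different contents show up as FAILED.
(cd fresh && find . -type f -exec md5sum {} +) > fresh.md5
(cd existing && md5sum -c ../fresh.md5) || true

# Files present only in the existing tree (lines unique to the second list).
(cd fresh    && find . -type f | sort) > fresh.list
(cd existing && find . -type f | sort) > existing.list
comm -13 fresh.list existing.list
```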

Actually, I'm going to skip the rootfs for now, since the rootfs under Linux_for_Tegra is populated with quite a bit of new material not found in the original Tegra_Linux_Sample-Root-Filesystem_R32.5.1_aarch64.tbz2 (presumably filled in by sdkm in preparation for flashing the Jetson). I think it's safe to assume the rootfs is good, since both the md5 and sha1 checksums of the file sdkm originally downloaded match the fresh download I did today as well as the sha1 checksum published with the release (from NVIDIA themselves).

Alright, I just compared the md5 and sha1 checksums of the files (Image, initrd, dtb, etc.) in Linux_for_Tegra/rootfs/boot on my host against the files on my USB drive (I couldn't compare against the NVMe or the built-in eMMC because of the OTA kernel (and other) update I did the other day), and we have an exact checksum match on all files (the dtb lives in slightly different places on the host vs. the USB, but those copies also match). This leads me to believe there weren't any file-corruption issues while flashing the Jetson from the host. I'm now looking at the backups of the kernel and kernel_b partitions from the built-in eMMC: they are identical to each other but don't match /boot/Image (again, on the built-in eMMC). Is /boot/Image possibly encrypted? However, the head of /boot/Image.sig nearly matches the head of the kernel/kernel_b partitions. I'm trying to reverse-engineer the 3k+ line flash script to see what connections I can make.
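
The head comparison can be sketched with cmp (a hypothetical sketch: the two synthetic files below stand in for a dump of the kernel partition and for /boot/Image.sig; a real partition would first be dumped with dd):

```shell
# kernel_part.bin stands in for a kernel-partition dump and Image.sig for
# /boot/Image.sig; here both start with the same 12-byte header.
cd "$(mktemp -d)"
printf 'SIGHDRkerneldata-data' > kernel_part.bin
printf 'SIGHDRkernelXYZ'       > Image.sig
# Limit the comparison to the first N bytes to see how far the heads agree.
cmp -n 12 kernel_part.bin Image.sig && echo 'first 12 bytes match'
# A full compare reports the first byte where the two diverge.
cmp kernel_part.bin Image.sig || true
```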


I don't think you need to compare the checksums. The only things we need to know:

  1. Have you tried the pure JetPack release again? Please remember, we want a clean setup.

  2. One of our team members noticed a step you mentioned, quoted below. We are not sure whether this step corrupted your eMMC file system, because you didn't share the full log from before this step. So please try (1) again to see whether pure JetPack has this issue or not.

Then finally today I updated the kernel, bootloader, and so on through the software update utility and then cloned the boot directory from the NVMe to the eMMC (altering it from its original sdkmanager flashed form)

There is no need to dig into what is inside the partitions or the rootfs right now. We only care about whether you hit this problem with pure JetPack. So remove your old driver package, let sdkm download a fresh one, and then use the plain flash command to flash the Xavier with mmcblk0p1.

I will attempt a fresh flash, but I doubt it will have any effect. Calculating all the checksums already shows that my current JetPack install is identical to the fresh download, and in addition all the files JetPack created to flash my Jetson are identical to the files that were on my Jetson. So I hope you can see where I'm going with this: there doesn't appear to be any logic to re-downloading files already proven identical to the release files, and it doesn't make sense that reflashing would change the outcome when the Jetson and JetPack's "prep" folders and files already match. I should also state that I have never touched or modified any files in the JetPack directories (so no device tree changes, custom kernels, or anything), which is also evident from the fact that all the file checksums match the fresh release. In any case, I'll give it a go and report the results.


Yes, we understand this may not work. We sometimes ask users to try something that may seem pointless, but going back to the base case is standard procedure here.
Please note that we have QA tests and many other users' error reports since rel-32.5, and we have not received any case similar to yours. So I want to confirm whether the base case really gets corrupted on your side. You had made many modifications before we discovered that even the eMMC was corrupted. So my suggestion is to do the clean setup again and see the result.

That makes sense. I'm curious, though, whether anyone would even have noticed this yet. If someone moved their rootfs (with /boot) to another storage medium and flashed their Jetson to target that medium for what they thought was both boot and rootfs, it seems this would go unnoticed, potentially even after an OTA update (because of how the kernel partition magically got updated). They would see their rootfs mounted from the target storage medium and wouldn't give a second thought to where the kernel was coming from, since there are no noticeable system issues or errors beyond those visible only by paying close attention to the UART logs.

If the eMMC is okay after your retry, we will move on to checking the USB case.

Okay, so the impossible happened. Uninstalling SDK Manager, deleting its downloads and its entire nvidia directory with Linux_for_Tegra, re-downloading and reinstalling SDK Manager, then letting it download everything again and flashing from its desktop UI seems to have magically fixed the boot issues (mostly). So the solution is the definition of insanity: doing the same thing over and over again and expecting a different result… go figure. There's still no real answer as to why a successful flash without any reported errors would produce a faulty kernel/system. The first UART log is from the Jetson's very first reboot after the full (re)flash/install, and the second UART log is from when I plugged in the USB drive and rebooted. The USB drive clearly has issues left over from the previous faulty install, but at least its partition can now be read. I imagine rsyncing the rootfs from the built-in eMMC will make it bootable. Assuming that works, there is one other issue: CBoot is supposed to check/boot NVMe before eMMC, and I never removed the NVMe from the Jetson (in fact it's been there since day one and even now holds a full rootfs with a most likely faulty kernel), so shouldn't the first UART log show it checking the NVMe and attempting to boot from it? Where is the NVMe altogether? Isn't it supposed to be possible to both boot from and mount the rootfs from the NVMe, rather than a hybrid with the kernel living on the eMMC?

uartlog_2021032301.txt (28.9 KB)
uartlog_2021032302.txt (29.9 KB)

I would say it is possible that you simply did too much configuration at once, so an error occurred.

Maybe you should check things one by one instead of doing them all together and then coming back to ask why something is broken. In such cases, I can only tell you to try a fresh setup again. IMO, this is very common here: users report something we consider impossible, and it turns out it cannot be reproduced. So there is really no need to dig into the reason in such cases, unless you find a method that definitely reproduces the issue.

As for the current USB issue, where did this USB drive come from? Did you format the USB drive and follow the steps in How to Boot from USB Drive? - #7 by carolyuu?