How to Boot from USB Drive?

Hey @WayneWWW, this really makes a lot of sense with the distinction between “mount rootfs” and “boots from” and your summary of the process. I can see now that although it's really nice having the rootfs on a bigger disk, updating the kernel down the road could introduce some issues. I've attached the boot log from when I had my USB drive attached directly to the Jetson with an exact mirror of my NVMe drive (the only difference on the USB drive is in /boot/extlinux/extlinux.conf, where root=/dev/sda1 instead of root=/dev/nvme0n1p1 or root=/dev/mmcblk0p1). The NVMe drive ultimately had its rootfs mounted instead of the USB drive, and I'm assuming the kernel on the eMMC was booted. Thanks for taking a look into this. I would definitely rather have the kernel and rootfs on the same partition (the “boots from” + “mount rootfs” team-up).

bootlog.txt (906.8 KB)


Hi,

Sorry for the confusion earlier. The log I need is the UART log, which includes the cboot log.

Totally my bad, I grabbed the wrong log. Attached should be the one you're after. Thanks again for looking into this, @WayneWWW.

uartlog.txt (98.4 KB)

I was reading through a thread over here and it made me think… based on my review of the UART logs and your explanation, I believe the /boot/extlinux/extlinux.conf file is being read from the Jetson's built-in eMMC (and not from the USB or NVMe drive like I want). That being said, what would happen if the file was changed so that LINUX and INITRD both pointed to the NVMe, like below? Would this work, or is /dev/nvme0n1p1 not accessible at this point? Also wondering now if this is what the /boot/extlinux/extlinux.conf file on the NVMe is supposed to look like in order to properly locate the kernel and ramdisk.

LABEL primary
MENU LABEL primary kernel
LINUX /dev/nvme0n1p1/boot/Image
INITRD /dev/nvme0n1p1/boot/initrd
APPEND ${cbootargs} quiet root=/dev/nvme0n1p1 rw rootwait rootfstype=ext4 console=ttyTCU0,115200n8 console=tty0 fbcon=map:0 net.ifnames=0
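
For comparison, my understanding (an assumption on my part, not something I've confirmed) is that the stock entry treats LINUX and INITRD as plain file paths on whatever partition the bootloader managed to mount, and only the root= argument in APPEND selects where the rootfs comes from, i.e. something like:

LABEL primary
MENU LABEL primary kernel
LINUX /boot/Image
INITRD /boot/initrd
APPEND ${cbootargs} quiet root=/dev/nvme0n1p1 rw rootwait rootfstype=ext4 console=ttyTCU0,115200n8 console=tty0 fbcon=map:0 net.ifnames=0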

Just tested this using the USB drive and it doesn’t have any effect. Still fails the USB section with “Cannot open partition kernel” and moves on to “boot from” the Jetson’s built-in eMMC (kernel) and mounts the root file system (rootfs) from the NVMe.

Just successfully updated the kernel, bootloader, device tree, and a whole bunch of other stuff following the advice in the response below. I simply hit “Reboot later” after all the updates installed, copied /boot on the NVMe to /boot on the Jetson's built-in eMMC, and then rebooted the Jetson (a sketch of that copy step is below). Obviously this is not the most ideal path, and I'm still looking forward to figuring out the boot issues so that the /boot folder (kernel, ramdisk, device tree, and so on) that's actually booted from can live on the NVMe as well.
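
For anyone following along, the copy was essentially this (a sketch; it assumes the NVMe rootfs is mounted at / and the eMMC APP partition is /dev/mmcblk0p1):

# mount the eMMC APP partition and mirror the freshly updated /boot onto it
sudo mount /dev/mmcblk0p1 /mnt
sudo rsync -a --delete /boot/ /mnt/boot/
sudo umount /mnt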

Okay, now things are getting really interesting. I just updated the kernel (and other stuff) and then mirrored to the built-in eMMC as stated in my previous message, so the eMMC and the NVMe are both fully updated. I then decided to stick in the USB drive (old/non-updated kernel and old/non-updated everything else) and reboot the Jetson to see if any of the new updates had fixed the USB problems (unlikely, I guess, since CBoot lives elsewhere and I think it is the issue?).

The boot process starts and the USB section once again shows the standard “Cannot open partition kernel”, but to my surprise, as the boot continues the system repeatedly crashes with “Kernel panic - not syncing: Attempted to kill init! exitcode=0x00007f00”. If I unplug the USB drive while CBoot is counting down to autoboot, then the system boots up as normal (“boots from” eMMC and “mounts rootfs” from NVMe). I attached the UART log from when I allowed it to panic once, then pulled the USB and let it boot successfully.

I would expect this if I were trying to mount the rootfs from the USB and the kernel didn't match the built-in eMMC… which I'm not even doing (I'm mounting the rootfs from the NVMe). I'm even more confused as to why this is happening, since the USB section appears to fail as if the USB were skipped. So why does a USB drive (that supposedly has no partition kernel) cause a kernel panic???

uartlog_2021032101.txt (122.7 KB)

Well… I just checked your uartlog.txt posted yesterday from #15.

This is the first time you posted a full log here, so I didn't notice this before.

Even when cboot tries to read the extlinux.conf from your eMMC, there is the same error as in the USB boot case. When that happens, our cboot initiates a fallback mechanism to read the kernel from a partition. That is, during the flash process (by flash.sh), our tool not only installs /boot into your rootfs but also flashes a backup kernel into a dedicated partition. When booting from the file system fails, it uses that kernel partition to boot instead.

[0005.637] I> ########## Fixed storage boot ##########
[0005.642] I> Already published: 00010003
[0005.646] I> Look for boot partition
[0005.649] I> Fallback: assuming 0th partition is boot partition
[0005.655] I> Detect filesystem
[0005.670] I> ext4_mount:588: Failed to allocate memory for group descriptor
[0005.671] E> Failed to mount file system!!
[0005.671] E> Invalid fm_handle (0xa06964a8) or mount path passed
[0005.675] I> Fallback: Load binaries from partition
[0005.679] W> No valid slot number is found in scratch register
[0005.685] W> Return default slot: _a
[0005.689] I> A/B: bin_type (37) slot 0
[0005.692] I> Loading kernel from partition

I think that explains why your kernel update method does not work when you try to update it on the built-in eMMC… because the boot process does not read it at all.
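
(If you want to confirm this from the running system, one rough check, just a suggestion on my side, is to compare the version string of the running kernel against the one embedded in /boot/Image:

cat /proc/version
strings /boot/Image | grep -m1 'Linux version'

If cboot fell back to the kernel partition, these two can disagree after you update /boot/Image.)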

Even in your latest successful boot result from the NVMe drive, the kernel is still loaded from the partition.

[0005.637] I> ########## Fixed storage boot ##########
[0005.642] I> Already published: 00010003
[0005.646] I> Look for boot partition
[0005.649] I> Fallback: assuming 0th partition is boot partition
[0005.655] I> Detect filesystem
[0005.670] I> ext4_mount:588: Failed to allocate memory for group descriptor
[0005.671] E> Failed to mount file system!!
[0005.671] E> Invalid fm_handle (0xa06964a8) or mount path passed
[0005.675] I> Fallback: Load binaries from partition
[0005.679] W> No valid slot number is found in scratch register
[0005.685] W> Return default slot: _a

Thus, I think we should first check why even the eMMC boot fails from the beginning… Is it a pure image from sdkmanager?

Yep everything on the eMMC drive was direct from the sdkmanager.

  • While I was getting the host set up I had booted up the Jetson just to poke around (so at that point it had the L4T that came pre-installed)
  • Then I flashed the Jetson from the host using the desktop UI and ran the Jetson like that for a bit while contemplating storage issues and “modern” software spreading all over the file system
  • Then I cloned the eMMC APP partition to several other drives (UFS cards, USB drives, and NVMe) while testing booting (but never altered the eMMC)
  • Then yesterday I used the flash utility from the host to set the rootfs to the NVMe. As a byproduct this wiped my eMMC except for the boot folder, and even that was slimmed down compared to my NVMe clone of the original /boot folder. I didn't mention this before, but I noticed after that flash that my /boot/Image and /boot/initrd were no longer the same between the eMMC and the NVMe (similar in size, but diff revealed they were no longer identical)
  • Then finally today I updated the kernel, bootloader, and so on through the software update utility and then cloned the boot directory from the NVMe to the eMMC (altering it from its original sdkmanager flashed form)

Wow, okay, so my brain is still processing all this. I don't have the Jetson up right now, so I'm just thinking out loud:

  • Is it possible to convert the working kernel partition into an image that I can replace /boot/Image with on my eMMC (and NVMe for that matter)?
  • How is the backup kernel partition working when, seconds earlier in the flash process, the same thing would have been written to the APP partition (although apparently corrupted?)?
  • How is the backup kernel partition still working after I updated the kernel in the OS? Wouldn't there be a mismatch in versions, or did this backup kernel partition on the non-rootfs drive (eMMC) somehow get updated by the OTA updates?
  • If the backup kernel partition gets modified by the OTA updates, then why doesn't the /boot folder on the eMMC also get updated (negating the need for me to copy the updates from the NVMe /boot before rebooting)? Although I guess if the kernel on the eMMC APP partition was never read, then my copy operation was more or less ignored in this case… but then how did the kernel partition on the eMMC get updated by the OTA updates? Did it not get updated? It says it's updated.

I'm quite confused as to the implications of these UART findings.

Also, I'm not sure what this means… my kernel was updated successfully without issue, which is why I'm further confused as to how the OTA updates could have modified this backup kernel partition but left the eMMC APP partition alone (although maybe this is because the system didn't know about the APP partition kernel location but did know about the location of the working and booted backup kernel partition… and hence updated that).

IMO, we first need to ask whether formatting the eMMC with sdkm is an option for you. I would like to deal with why even sdkm would cause a file system error here. I don't think you should dig into the kernel backup or anything else when the system is already messed up.

As for your question, I think OTA would update both of them. But OTA would not expect cboot to be unable to read your file system.

I will check my rel-32.5 device first to make sure a default Xavier really reads the extlinux.conf from the eMMC rootfs…

My Xavier device log was still from the old release (rel-32.4.4). I have not yet checked the 32.5 log.

So didn’t the flash I performed yesterday (to switch the rootfs to the NVMe) technically reformat the entire eMMC and re-write all the partitions and everything in there? I’m pretty sure I saw that happening in the log on the host. And I had USB issues before that and continue to have them now after that. Could it be that the images/packages that the sdkm downloaded are themselves partially corrupted (still confused about the working backup kernel partition and corrupted /boot/Image if they came from the same place)? Does the sdkm do like a checksum on the original downloads?

So didn’t the flash I performed yesterday (to switch the rootfs to the NVMe) technically reformat the entire eMMC and re-write all the partitions and everything in there?

Yes, it should. Sorry about that. I am used to switching between topics here, so I may forget what you've tried after reading too many topics from others…
So far, my guess is that when “cboot” tries to read the file system from any storage on your side, it has a problem. However, when the “kernel” tries to read the same file system, it passes.

I have this guess because it sounds like all the file systems here for USB/NVMe are cloned from the eMMC, right? So if the eMMC was corrupted from the beginning, the same corruption happens on the USB too.

Please give me some time to discuss with the internal team. The engineers are in a different timezone, so I need your kind patience.


In the meanwhile, while waiting for their feedback, could you also try to remove the driver package installed by your sdkm and let it download/flash again? I mean removing the BSP on the host side and letting sdkm do a clean download again.

Or you can also directly download the BSP and rootfs from our DLC and set them up manually (no sdkm required and no need to remove anything; they are just separate files).

The “quick start guide” on that page also walks you through the steps.
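
Roughly, the manual flow looks like this (a sketch from memory; the file names match the R32.5.1 downloads listed later in this thread, and the board config name jetson-agx-xavier-devkit is an assumption, so please follow the guide for the exact one):

tar xjf Jetson_Linux_R32.5.1_aarch64.tbz2
cd Linux_for_Tegra/rootfs
sudo tar xjpf ../../Tegra_Linux_Sample-Root-Filesystem_R32.5.1_aarch64.tbz2
cd ..
sudo ./apply_binaries.sh
# put the Jetson into recovery mode, then:
sudo ./flash.sh jetson-agx-xavier-devkit mmcblk0p1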


Okay, back at it. My connection is a bit spotty as I'm currently traveling, but I'll re-download everything as soon as I'm able. In the meantime, I generated the md5 checksums for all the files the sdkm downloaded before. If anyone has time to run the same command and post their results, I would really appreciate it.

For those wondering what this is all about, comparing these md5 checksums with md5 checksums from known working downloads will help me identify corrupted downloads on my end (with only the very rare exception of hash collisions and assuming other people do not also have corrupted downloads).

Command (change hogank and/or the download location if yours differ):

find /home/hogank/Downloads/nvidia/sdkm_downloads/ -type f -exec md5sum "{}" + > /home/hogank/Desktop/sdkm_downloads_md5.txt

Results, from /home/hogank/Desktop/sdkm_downloads_md5.txt (path prefixes removed below for brevity):

13fd4a30819dc1276ca4cbb1425ddd72 libnvonnxparsers7_7.1.3-1+cuda10.2_arm64.deb
a75c6180271afbffb786229a93f7cefe libnvinfer-doc_7.1.3-1+cuda10.2_all.deb
a151651f2aa625622b96909d9ced3029 nsight-systems-cli-2020.5.3_2020.5.3.17-1_arm64.deb
608a3b21efc81574abd41bfee8947ba9 libnvinfer-dev_7.1.3-1+cuda10.2_arm64.deb
8ad8bf9d497a753fd38c34bed375b833 libnvparsers7_7.1.3-1+cuda10.2_arm64.deb
ad5a4d2513ec0ed2f743afb948adb9b0 OpenCV-4.1.1-2-gd5a58aa75-aarch64-dev.deb
4c0b65a88ce83049d8811bae4fafddbe libvisionworks-sfm-repo_0.90.4.501_arm64.deb
cef544ada8bb548e9960987e930cb6fe nvidia-docker2_2.2.0-1_all.deb
5f72ea29de7fc6447a8d318e422db46a nvidia-container-runtime_3.1.0-1_arm64.deb
afa252ef5d96ab58e6c3672c6b5f90ed nvidia-container-csv-visionworks_1.6.0.501_arm64.deb
d210e7990c507f4d3ef1f2762e936e00 vpi-cross-aarch64-l4t-1.0.15-x86_64-linux.deb
68efa7782a6b0f927424b00ba848704b libcudnn8-doc_8.0.0.180-1+cuda10.2_arm64.deb
beea9cf0580265cf85b41b61e202d023 vpi-dev-1.0.15-cuda10-x86_64-linux.deb
af1138c048f628766f7d7910fe3252bd Jetson_Linux_R32.5.1_aarch64.tbz2
c593f85f62ffd61120d6fbc34006f219 libnvparsers-dev_7.1.3-1+cuda10.2_arm64.deb
478fe3b81baf2a36a5c62b142e47aa52 libvisionworks-sfm-repo_0.90.4.501_amd64.deb
e19ba074d111c0af9f0bf7f4fcdfbfa7 OpenCV-4.1.1-2-gd5a58aa75-aarch64-libs.deb
adfa4a7c800eebe32fd19d42db2ee631 libnvinfer-samples_7.1.3-1+cuda10.2_all.deb
c512ce9407a86ec484ad0f0b81c79e42 python3-libnvinfer_7.1.3-1+cuda10.2_arm64.deb
94dd40f69c070c60c9afe172823ae3d7 vpi-dev-1.0.15-aarch64-l4t.deb
437971a5798e78a22d4a4c10ba3b1502 cuda-repo-cross-aarch64-10-2-local-10.2.89_1.0-1_all.deb
046ab7b58b5d8ba61090a1566d2b0068 NsightSystems-linux-public-2020.5.3.17-0256620.deb
30825790ca61fc8ae67e238c71ba31a6 libvisionworks-tracking-repo_0.88.2.501_amd64.deb
774350dc066a328a7f47d222385ef3a9 libcudnn8_8.0.0.180-1+cuda10.2_arm64.deb
9b76a8f2c4d53fa7cad48bc7cb4f2257 nvidia-container-csv-cudnn_8.0.0.180-1+cuda10.2_arm64.deb
d41d8cd98f00b204e9800998ecf8427e cuda-repo-ubuntu1804-10-2-local-10.2.89-440.40_1.0-1_amd64.deb.mtd
6804d700f424a55073cd8d8b39d5ca19 vpi-lib-1.0.15-aarch64-l4t.deb
073c73662cd7ae907284b1d9ba67a838 vpi-demos-1.0.15-aarch64-l4t.deb
e7e779c024f2d2d956378a67ebfea8b4 sdkml3_jetpack_l4t_451.json
0716042b14e58a76d0426fa6fafd556a libnvinfer-plugin-dev_7.1.3-1+cuda10.2_arm64.deb
7d61fb25722287bcece496008b3a9e8b deepstream-5.1_5.1.0-1_arm64.deb
5969fc1376ac31cbadfd42276e271ffd OpenCV-4.1.1-2-gd5a58aa75-aarch64-samples.deb
bc75cded1a32d5871f0e18a7dfa4a0c0 tensorrt_7.1.3.0-1+cuda10.2_arm64.deb
bebb557cfa3d62d99a6f3615230f79dd python-libnvinfer-dev_7.1.3-1+cuda10.2_arm64.deb
291e2efb3286eafd24540dbcb17ebff6 libcudnn8-dev_8.0.0.180-1+cuda10.2_arm64.deb
7bd05af24165d06e0463bc09ae75c28a libnvinfer-plugin7_7.1.3-1+cuda10.2_arm64.deb
4d956d853d09c2f06e4bcbf7a7505318 uff-converter-tf_7.1.3-1+cuda10.2_arm64.deb
6c25179690472476d6b8ac94578a6289 libvisionworks-tracking-repo_0.88.2.501_arm64.deb
e6eee2ab7d1a73b0567516bca1cafd72 libnvinfer-bin_7.1.3-1+cuda10.2_arm64.deb
bd5924c9624e029348313db155e0e395 sdkml3_jetpack_l4t_451_deepstream.json
259a2c8dc34ba4efe39217379bd02e0a nvidia-l4t-jetson-multimedia-api_32.5.1-20210219084708_arm64.deb
286beb1a09da17efcef278930384f86b cuda-repo-ubuntu1804-10-2-local-10.2.89-440.40_1.0-1_amd64.deb
056ea0183c8a779ed097f37d2b46fb4f libvisionworks-repo_1.6.0.501_amd64.deb
809212ce67fa634ceb10731329370862 python3-libnvinfer-dev_7.1.3-1+cuda10.2_arm64.deb
9cbfb8d8c7c4d678532a4f811de13fe6 libnvonnxparsers-dev_7.1.3-1+cuda10.2_arm64.deb
8a0fdf5ea469231f2f623626054eb4e3 python-libnvinfer_7.1.3-1+cuda10.2_arm64.deb
61c91707eb80f334614e5da624738ea0 vpi-samples-1.0.15-cuda10-x86_64-linux.deb
9db88f13e55751329d16d6d6ec0ae4ac OpenCV-4.1.1-2-gd5a58aa75-aarch64-licenses.deb
f7a3983260a0f22d0b3fd84829952794 libnvidia-container-tools_0.9.0_beta.1_arm64.deb
d6f6b1b3ad06965caca5ca792c43aa43 cuda-repo-l4t-10-2-local-10.2.89_1.0-1_arm64.deb
c20e09428744046b3fad6534a170e3df vpi-samples-1.0.15-aarch64-l4t.deb
33805bde17712e09f447c5e059dcb082 Tegra_Linux_Sample-Root-Filesystem_R32.5.1_aarch64.tbz2
dadb934c844aaaab1f675420c3467b00 graphsurgeon-tf_7.1.3-1+cuda10.2_arm64.deb
c343f119238659c724732da212163e83 libnvidia-container0_0.9.0_beta.1_arm64.deb
fe8197f368f7edf37cd3f35b75a92584 NVIDIA_VisionWorks_References.zip
ceb03d4e65e8cb8dc591238c1307f074 NVIDIA_Nsight_Graphics_L4T_Public_2020.5.20329.deb
be45a916e63dbe5be853db2c5c57cf62 nvidia-container-csv-tensorrt_7.1.3.0-1+cuda10.2_arm64.deb
eb3cb607f89f319a1125ca09ca2b8141 nvidia-container-csv-cuda_10.2.89-1_arm64.deb
48cc6c5935636fafb6b2f1078f721cbc nvidia-container-toolkit_1.0.1-1_arm64.deb
299622cc9331a15a8ac65beb7ec4579e libnvinfer7_7.1.3-1+cuda10.2_arm64.deb
a18df12c5576b0a53f8242ee3a1e5900 vpi-lib-1.0.15-cuda10-x86_64-linux.deb
d622b7d729b62ecee44c9e7d289918cb OpenCV-4.1.1-2-gd5a58aa75-aarch64-python.deb
6232660acb8305fc6e52125067fc0008 libvisionworks-repo_1.6.0.501_arm64.deb
f409dc07b205d291cc86678c826e5156 vpi-demos-1.0.15-cuda10-x86_64-linux-ubuntu1804.deb
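
If anyone posts their list, a quick way to spot mismatches would be something like this (a sketch; theirs_md5.txt is a hypothetical name for the other person's results, and it assumes both files use the same bare filenames):

sort -k2 sdkm_downloads_md5.txt > mine_sorted.txt
sort -k2 theirs_md5.txt > theirs_sorted.txt
diff mine_sorted.txt theirs_sorted.txt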

Found the sha1 checksums for the latest release (undocumented, but it mirrors the previous releases listed in the Jetson Download Center) at this URL. If anyone prefers sha1 for any reason, I've updated the command and results below. The sha1 checksums listed for the release only cover a subset of the files the sdkm downloaded, so I can only compare a few. I would still appreciate it if someone would run this on their end and post their results.

Command (change hogank and/or the download location if yours differ):

find /home/hogank/Downloads/nvidia/sdkm_downloads/ -type f -exec sha1sum "{}" + > /home/hogank/Desktop/sdkm_downloads_sha1.txt

Results, from /home/hogank/Desktop/sdkm_downloads_sha1.txt (again, path prefixes removed below for brevity):

ad2b3a674302ffeccb1377f5914bbef8c13a76c2 libnvonnxparsers7_7.1.3-1+cuda10.2_arm64.deb
0f4b41b8b583c1060f784c759de39b2649777dc3 libnvinfer-doc_7.1.3-1+cuda10.2_all.deb
fbe2fbd84b97ce5d28b0c9a047237a26d2a9c1d3 nsight-systems-cli-2020.5.3_2020.5.3.17-1_arm64.deb
104386e74d332ed2768087d5f7f086b3a80b8e01 libnvinfer-dev_7.1.3-1+cuda10.2_arm64.deb
12f33966cb652c0333cd123d626833b8691450e1 libnvparsers7_7.1.3-1+cuda10.2_arm64.deb
a76514250c6e700f259f571cdf88bc8a6ab91fbb OpenCV-4.1.1-2-gd5a58aa75-aarch64-dev.deb
7638c1ea2130c315b73870816b42f99e50ebe064 libvisionworks-sfm-repo_0.90.4.501_arm64.deb
83eea5bd7fabda59305c8f0951dee57151197023 nvidia-docker2_2.2.0-1_all.deb
73826f2c150dfacc12f6393c783f71e7750d8a55 nvidia-container-runtime_3.1.0-1_arm64.deb
da958d4c490e363e8bf62b2d315e53b3997507b3 nvidia-container-csv-visionworks_1.6.0.501_arm64.deb
7c780667d924e02acd1a85cd199818b4516e987b vpi-cross-aarch64-l4t-1.0.15-x86_64-linux.deb
a6efd123c91cc42f77343d61acbcbf423c2d4926 libcudnn8-doc_8.0.0.180-1+cuda10.2_arm64.deb
c101f9001af7b18a33a4a0c18082bb1a53275ef6 vpi-dev-1.0.15-cuda10-x86_64-linux.deb
9d95b2a1e647d71b32257e90990f2a582bd9e0ec Jetson_Linux_R32.5.1_aarch64.tbz2
f4e87e71d04e639e5b21ee20dfe2ce9ea8d4e92a libnvparsers-dev_7.1.3-1+cuda10.2_arm64.deb
e91175eb0e3d122060a7a3e335b132dda5288360 libvisionworks-sfm-repo_0.90.4.501_amd64.deb
4038573782e90a1c5e0d766257d3d69b915a60bb OpenCV-4.1.1-2-gd5a58aa75-aarch64-libs.deb
01d196d195839e8873ef4154e5cd0169b572ef5f libnvinfer-samples_7.1.3-1+cuda10.2_all.deb
7b6877c8b48670d3a7296508b2b87b56d775cc5a python3-libnvinfer_7.1.3-1+cuda10.2_arm64.deb
9b31ac4299580bddf67fa439ce57e1639a3b58c6 vpi-dev-1.0.15-aarch64-l4t.deb
a50b3ddd5c907ed685df0fb81ce84788ccfc6436 cuda-repo-cross-aarch64-10-2-local-10.2.89_1.0-1_all.deb
fde6965b6f079e27e9ac8f0f0eda6bbc9e76ce4d NsightSystems-linux-public-2020.5.3.17-0256620.deb
14c659d05d5cbf814161a74285878774e3ab2645 libvisionworks-tracking-repo_0.88.2.501_amd64.deb
d984785d62fcebb54bb131e8eecd35092d3fc6c4 libcudnn8_8.0.0.180-1+cuda10.2_arm64.deb
1a91a2db048da9e9ee8d87e24a12aba815c75391 nvidia-container-csv-cudnn_8.0.0.180-1+cuda10.2_arm64.deb
da39a3ee5e6b4b0d3255bfef95601890afd80709 cuda-repo-ubuntu1804-10-2-local-10.2.89-440.40_1.0-1_amd64.deb.mtd
e9e1987f32b8bed269c4b3c805f7683d150f0ef4 vpi-lib-1.0.15-aarch64-l4t.deb
e43cf8dfc8739bf38b9fdaecb8086f9a21acbd2a vpi-demos-1.0.15-aarch64-l4t.deb
8264f8c945195e9ab211ebf6fa71e326bfef80f6 sdkml3_jetpack_l4t_451.json
570a91ec21c073227951cd0083542dd083a5771a libnvinfer-plugin-dev_7.1.3-1+cuda10.2_arm64.deb
91dca9f0935c3c2eb142104cd09002c5103739e9 deepstream-5.1_5.1.0-1_arm64.deb
178aa1d561bf220609b653515e727300837ae8f2 OpenCV-4.1.1-2-gd5a58aa75-aarch64-samples.deb
5409e9cd36b7106429def29e1de703e26bff06ff tensorrt_7.1.3.0-1+cuda10.2_arm64.deb
edf59625503fb7e6a6e6adce9dea18e83df6c494 python-libnvinfer-dev_7.1.3-1+cuda10.2_arm64.deb
0d02f2bee31e4f1bdb8e7c8e8d5ed6101be0eac9 libcudnn8-dev_8.0.0.180-1+cuda10.2_arm64.deb
602a8943ef08313a3fc91e6d7a6321f7a4cc07fd libnvinfer-plugin7_7.1.3-1+cuda10.2_arm64.deb
86063ba9f7701da7ae338ded61e46b66e4c7c7ed uff-converter-tf_7.1.3-1+cuda10.2_arm64.deb
c662cb6709f2c3ae922e5369e266dfbe25ebba4f libvisionworks-tracking-repo_0.88.2.501_arm64.deb
2856d32a53d28fb013382aaf9afcca3836ca3c4b libnvinfer-bin_7.1.3-1+cuda10.2_arm64.deb
a5271fca5fabb062f9c14a33b50de3e5bc88ea15 sdkml3_jetpack_l4t_451_deepstream.json
c59e44678b4407841d58d47f23f9d6a66eca9769 nvidia-l4t-jetson-multimedia-api_32.5.1-20210219084708_arm64.deb
bd733f27add7bdae96f841460fd55c3655da7750 cuda-repo-ubuntu1804-10-2-local-10.2.89-440.40_1.0-1_amd64.deb
a2b0b0a614436a1a88f28ed5c5e5f11909049f17 libvisionworks-repo_1.6.0.501_amd64.deb
9523b1ea08496273e4883fe611d410f6f1e09e5b python3-libnvinfer-dev_7.1.3-1+cuda10.2_arm64.deb
de75a0d2585baab47f4750ecf25e731bc6e24eed libnvonnxparsers-dev_7.1.3-1+cuda10.2_arm64.deb
ab1de5da2f5db42f805d142c0ac9e4ea9c9977de python-libnvinfer_7.1.3-1+cuda10.2_arm64.deb
a562d57242fe2da5a611fbd6cd5e4728428efd10 vpi-samples-1.0.15-cuda10-x86_64-linux.deb
6b48447f238bab123923d73a0e5bcf88dcba6574 OpenCV-4.1.1-2-gd5a58aa75-aarch64-licenses.deb
a6dea092044a0d6bf129ef8e689b4d42a454178e libnvidia-container-tools_0.9.0_beta.1_arm64.deb
0f5a9fca813d93479a1b453e980f74880461c343 cuda-repo-l4t-10-2-local-10.2.89_1.0-1_arm64.deb
3d8901e443f20080ce19f458bf1d76a650d271bb vpi-samples-1.0.15-aarch64-l4t.deb
d9e0fa1b60e5a91744e7b8fd2088a42b52f4f036 Tegra_Linux_Sample-Root-Filesystem_R32.5.1_aarch64.tbz2
90cad5bd25bfad11258292331a8a37403e41d325 graphsurgeon-tf_7.1.3-1+cuda10.2_arm64.deb
7e48eff22dd30c1536123574dcfaff92a92fb7ed libnvidia-container0_0.9.0_beta.1_arm64.deb
d214239689875c5fe03fd8b73d37516711f4058e NVIDIA_VisionWorks_References.zip
4214aef525f3541fcbb6af671630058e1516dac9 NVIDIA_Nsight_Graphics_L4T_Public_2020.5.20329.deb
e82ef547b0e243c72f5d2cc7552f8f789ec821bc nvidia-container-csv-tensorrt_7.1.3.0-1+cuda10.2_arm64.deb
ce2223091316605eca773b40c48d9164a5a52ca2 nvidia-container-csv-cuda_10.2.89-1_arm64.deb
81c257b061567fe55262cf1ed70927e01740b03d nvidia-container-toolkit_1.0.1-1_arm64.deb
9922d867da9388932c15df438fb5286790f84108 libnvinfer7_7.1.3-1+cuda10.2_arm64.deb
90e75ff326e48548fe88cbd9c335e700fe706510 vpi-lib-1.0.15-cuda10-x86_64-linux.deb
b2a225aa95015ab61a3f134886f85f165c2f5d55 OpenCV-4.1.1-2-gd5a58aa75-aarch64-python.deb
bf4849ef7b0509adb417a6f26d4762307a130967 libvisionworks-repo_1.6.0.501_arm64.deb
8eab8a8e79c6d724df876ebabcaa7dc8c52fe508 vpi-demos-1.0.15-cuda10-x86_64-linux-ubuntu1804.deb

And a quick comparison shows that both of the following files share identical sha1 checksums (computed on my host) with the list of sha1 checksums published with the release.

d9e0fa1b60e5a91744e7b8fd2088a42b52f4f036 Tegra_Linux_Sample-Root-Filesystem_R32.5.1_aarch64.tbz2
9d95b2a1e647d71b32257e90990f2a582bd9e0ec Jetson_Linux_R32.5.1_aarch64.tbz2
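
For the record, the check was just a matter of pulling those two entries out of each list and eyeballing them (release_sha1.txt is a hypothetical name for the published checksum list saved locally):

grep -E 'Jetson_Linux_R32.5.1|Sample-Root-Filesystem_R32.5.1' sdkm_downloads_sha1.txt
grep -E 'Jetson_Linux_R32.5.1|Sample-Root-Filesystem_R32.5.1' release_sha1.txt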

Okay, diving into a comparison of the official L4T Driver Package (BSP), Tegra186_Linux_R32.5.1_aarch64.tbz2, against my pre-existing Linux_for_Tegra directory (excluding rootfs for now, since it really isn't part of the BSP; for me that's /home/hogank/nvidia/nvidia_sdk/JetPack_4.5.1_Linux_JETSON_AGX_XAVIER/Linux_for_Tegra): all 436 files from Tegra186_Linux_R32.5.1_aarch64.tbz2 were present in my Linux_for_Tegra directory. 4 files common to both came back with different checksums (possibly modified during the sdkm flash process?), and 46 files beyond those 436 existed only in my Linux_for_Tegra directory (presumably generated by the sdkm during flashing). A sketch of how I ran the comparison is just below, followed by the breakdown; I'll move on to the rootfs next.
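
The comparison was essentially this (a sketch; paths are from my setup, and the temp file names are arbitrary):

mkdir /tmp/bsp && tar xjf Tegra186_Linux_R32.5.1_aarch64.tbz2 -C /tmp/bsp
(cd /tmp/bsp/Linux_for_Tegra && find . -type f -exec md5sum {} + | sort -k2 > /tmp/bsp_md5.txt)
(cd /home/hogank/nvidia/nvidia_sdk/JetPack_4.5.1_Linux_JETSON_AGX_XAVIER/Linux_for_Tegra && find . -type f -not -path './rootfs/*' -exec md5sum {} + | sort -k2 > /tmp/mine_md5.txt)
diff /tmp/bsp_md5.txt /tmp/mine_md5.txt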

4 files common to Tegra186_Linux_R32.5.1_aarch64.tbz2 and Linux_for_Tegra with different checksums - presumably modified by the sdkm during flashing

  • /bootloader/adsp-fw.bin
  • /bootloader/eks.img
  • /bootloader/nvtboot_applet_t194.bin
  • /bootloader/spe_t194.bin

46 files found only under my pre-existing Linux_for_Tegra directory - presumably generated by the sdkm during flashing

  • /bootloader/__pycache__/tegraflash_internal.cpython-36.pyc
  • /bootloader/badpage.bin
  • /bootloader/boot.img
  • /bootloader/boot.img.sb
  • /bootloader/cvm.bin
  • /bootloader/emmc_bootblob_ver.txt
  • /bootloader/flash_parameters.txt
  • /bootloader/flash_win.bat
  • /bootloader/flash.xml
  • /bootloader/flash.xml.sb
  • /bootloader/flashcmd.txt
  • /bootloader/initrd
  • /bootloader/kernel_bootctrl.bin
  • /bootloader/kernel_tegra194-p2888-0001-p2822-0000.dtb
  • /bootloader/kernel_tegra194-p2888-0001-p2822-0000.dtb.sb
  • /bootloader/mb1_t194_prod_sigheader_encrypt.bin
  • /bootloader/mb1_t194_prod_sigheader.bin
  • /bootloader/mb1_t194_prod_sigheader.hash
  • /bootloader/recovery.img
  • /bootloader/recovery.ramdisk
  • /bootloader/system.img
  • /bootloader/tegra194-a02-bpmp-p2888-a04.dtb
  • /bootloader/tegra194-br-bct-sdmmc.cfg
  • /bootloader/tegra194-mb1-bct-gpioint-p2888-0000-p2822-0000.cfg
  • /bootloader/tegra194-mb1-bct-memcfg-p2888.cfg
  • /bootloader/tegra194-mb1-bct-misc-flash.cfg
  • /bootloader/tegra194-mb1-bct-misc-l4t.cfg
  • /bootloader/tegra194-mb1-bct-pmic-p2888-0001-a04-E-0-p2822-0000.cfg
  • /bootloader/tegra194-mb1-bct-ratchet-p2888-0000-p2822-0000.cfg
  • /bootloader/tegra194-mb1-bct-reset-p2888-0000-p2822-0000.cfg
  • /bootloader/tegra194-mb1-bct-scr-cbb-mini.cfg
  • /bootloader/tegra194-mb1-soft-fuses-l4t.cfg
  • /bootloader/tegra194-mb1-uphy-lane-p2888-0000-p2822-0000.cfg
  • /bootloader/tegra194-memcfg-sw-override.cfg
  • /bootloader/tegra194-p2888-0001-p2822-0000.dtb
  • /bootloader/tegra194-p2888-0001-p2822-0000.dtb.rec
  • /bootloader/tegra19x-mb1-bct-device-sdmmc.cfg
  • /bootloader/tegra19x-mb1-padvoltage-p2888-0000-a00-p2822-0000-a00.cfg
  • /bootloader/tegra19x-mb1-pinmux-p2888-0000-a04-p2822-0000-b01.cfg
  • /bootloader/tegra19x-mb1-prod-p2888-0000-p2822-0000.cfg
  • /bootloader/temp_user_dir/boot_sigheader.img.encrypt
  • /bootloader/temp_user_dir/boot.img
  • /bootloader/temp_user_dir/kernel_tegra194-p2888-0001-p2822-0000_sigheader.dtb.encrypt
  • /bootloader/temp_user_dir/kernel_tegra194-p2888-0001-p2822-0000.dtb
  • /bootloader/temp_user_dir/kernel_tegra194-p2888-0001-p2822-0000.dtb.sig
  • /kernel/dtb/tegra194-p2888-0001-p2822-0000.dtb.rec