I used the command below to flash the device with rootfs A/B support enabled.
sudo ROOTFS_AB=1 ./tools/kernel_flash/l4t_initrd_flash.sh \
--external-device nvme0n1 \
-c ./tools/kernel_flash/flash_l4t_nvme_rootfs_ab.xml \
jetson-orin-nano-devkit-nvme \
external
Before flashing, I changed the instance number from 4 to 0 in the file
Linux_for_Tegra/bootloader/generic/cfg/flash_t234_qspi_nvme.xml:
<device type="nvme" instance="4" sector_size="512" num_sectors="INT_NUM_SECTORS" >
<partition name="master_boot_record" type="protective_master_boot_record">
<allocation_policy> sequential </allocation_policy>
<filesystem_type> basic </filesystem_type>
<size> 512 </size>
<file_system_attribute> 0 </file_system_attribute>
<allocation_attribute> 8 </allocation_attribute>
<percent_reserved> 0 </percent_reserved>
</partition>
The boot-up log is as follows:
I> MB2 (version: 0.0.0.0-t234-54845784-af79ed0a)
I> t234-A01-1-Silicon (0x12347)
I> Boot-mode : Coldboot
I> Emulation:
I> Entry timestamp: 0x001ee607
I> Regular heap: [base:0x40040000, size:0x10000]
I> DMA heap: [base:0x173800000, size:0x800000]
I> Task: SE error check
I> Task: Crypto init
I> Task: MB2 Params integrity check
I> Task: Enable CCPLEX WDT 5th expiry
I> Task: ARI update carveout TZDRAM
I> Task: Configure OEM set LA/PTSA values
I> Task: Check MC errors
I> Task: Enable hot-plug capability
I> Task: PSC mailbox init
I> Task: Enable clock for external modules
I> Task: Measured Boot init
I> Task: fTPM silicon identity init
I> fTPM is not enabled.
I> Task: OEM SC7 context save init
I> Task: I2C register
I> Task: Map CCPLEX_INTERWORLD_SHMEM carveout
I> Task: Program CBB PCIE AMAP regions
I> Task: Boot device init
I> Boot_device: QSPI_FLASH instance: 0
I> Qspi clock source : pllc_out0
I> QSPI Flash: Macronix 64MB
I> QSPI-0l initialized successfully
I> Secondary storage device: QSPI_FLASH instance: 0
I> Secondary storage device: NVME instance: 0
I> Initializing nvme device instance 0
I> Initializing nvme controller
I> tegrabl_pcie_soc_preinit: (0):
I> Unpowergate
I> unknown unpowergate domain_id : 6
I> tegrabl_pcie_soc_init: (0):
I> APPL initialization ...
!0x380 Exception! [elr:0x50049900, spsr:0x600002cd, esr:0xbe000011, far:0x0]
Could you provide the MB2 source code, or an explanation of the failure at “NVME instance: 0”?
We have guidance on flashing with rootfs redundancy enabled:
https://docs.nvidia.com/jetson/archives/r36.3/DeveloperGuide/SD/FlashingSupport.html#using-initrd-flash-with-orin-nx-and-nano
jacky_gong:
Before flashing, I changed the instance number from 4 to 0 in the file
Linux_for_Tegra/bootloader/generic/cfg/flash_t234_qspi_nvme.xml:
<device type="nvme" instance="4" sector_size="512" num_sectors="INT_NUM_SECTORS" >
<partition name="master_boot_record" type="protective_master_boot_record">
<allocation_policy> sequential </allocation_policy>
<filesystem_type> basic </filesystem_type>
<size> 512 </size>
<file_system_attribute> 0 </file_system_attribute>
<allocation_attribute> 8 </allocation_attribute>
<percent_reserved> 0 </percent_reserved>
</partition>
Modifying this file has no effect while you are flashing with tools/kernel_flash/flash_l4t_nvme_rootfs_ab.xml.
…
Also, why are you making such changes?
Yes, you are right; I didn’t actually want to change this instance number.
But when I use “tools/create_nvme_disk_image.sh” to create a disk image, it reports:
process part_num=15;part_name=reserved;part_size=502792192;part_file=;part_type=8300 exited
process part_num=16;part_name=master_boot_record;part_size=512;part_file=;part_type=8300 started
dd: error reading '/media/Jetson/R36.3/Linux_for_Tegra/bootloader/signed/': Is a directory
According to the Python script “bootloader/tegraflash_impl_t234.py”, master_boot_record only supports instance 0, not instance 4:
tegraflash_gpt_image_name_map = {
'nvme_0_master_boot_record': 'mbr_12_0.bin',
'nvme_0_primary_gpt': 'gpt_primary_12_0.bin',
'nvme_0_secondary_gpt': 'gpt_secondary_12_0.bin',
'sdcard_0_master_boot_record': 'mbr_6_0.bin',
'sdcard_0_primary_gpt': 'gpt_primary_6_0.bin',
'sdcard_0_secondary_gpt': 'gpt_secondary_6_0.bin',
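This map entry gap also appears to explain the dd error: with instance 4 there is no matching entry, so no MBR image name is resolved and the parsed part_file is empty; dd is then handed the signed/ directory itself. A minimal, self-contained reproduction (hedged sketch; a temporary directory stands in for Linux_for_Tegra/bootloader/signed/):

```shell
#!/bin/sh
# Reproduce the "Is a directory" dd error in isolation. When nvptparser
# leaves part_file empty -- as in the failing log line
# "part_name=master_boot_record;...;part_file=;" -- the dd input path
# degenerates to the signed/ directory, and the read fails with EISDIR.
signed_image_dir="$(mktemp -d)"   # stands in for bootloader/signed/
part_file=""                      # empty, exactly as reported in the log

if dd if="${signed_image_dir}/${part_file}" of=/dev/null bs=512 count=1 2>&1 \
    | grep -q "Is a directory"; then
    result="reproduced: dd reads a directory"
else
    result="not reproduced"
fi
echo "${result}"
rmdir "${signed_image_dir}"
```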
It seems there are two choices:
One is changing the instance number to 0 in flash_t234_qspi_nvme.xml.
Two is adding “‘nvme_4_master_boot_record’: ‘mbr_12_4.bin’,” to the map.
I selected the first solution, but it caused this new issue.
What do you think? Is there an official solution for the “create_nvme_disk_image.sh” issue?
jacky_gong:
But when I use “tools/create_nvme_disk_image.sh” to create a disk image, it reports:
process part_num=15;part_name=reserved;part_size=502792192;part_file=;part_type=8300 exited
process part_num=16;part_name=master_boot_record;part_size=512;part_file=;part_type=8300 started
dd: error reading '/media/Jetson/R36.3/Linux_for_Tegra/bootloader/signed/': Is a directory
We don’t have any scripts called create_nvme_disk_image.sh
…
Where did you get this stuff?
Sorry, I forgot to mention that this file was created by my colleagues.
tools/create_nvme_disk_image.sh
#!/bin/bash
script_name="$(basename "${0}")"
l4t_tools_dir="$(cd "$(dirname "${0}")" && pwd)"
l4t_dir="${l4t_tools_dir%/*}"
sudo ${l4t_tools_dir}/jetson-disk-image-creator.sh -o nvme-blob.img -b jetson-orin-nano-devkit-nvme -d NVME && sudo gzip -v -9 nvme-blob.img
and jetson-disk-image-creator.sh was also modified to add NVMe support.
#!/bin/bash
# SPDX-FileCopyrightText: Copyright (c) 2019-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: LicenseRef-NvidiaProprietary
#
# NVIDIA CORPORATION, its affiliates and licensors retain all intellectual
# property and proprietary rights in and to this material, related
# documentation and any modifications thereto. Any use, reproduction,
# disclosure or distribution of this material and related documentation
# without an express license agreement from NVIDIA CORPORATION or
# its affiliates is strictly prohibited.
# This is a script to generate the SD card flashable image for
# jetson-xavier-nx-devkit and jetson-agx-orin-devkit platforms
set -e
function usage()
{
if [ -n "${1}" ]; then
echo "${1}"
fi
echo "Usage:"
echo "${script_name} -o <sd_blob_name> -b <board> -r <revision> -d <device>"
echo " sd_blob_name - valid file name"
echo " board - board name. Supported boards are:"
echo " jetson-xavier-nx-devkit"
echo " jetson-agx-xavier-devkit"
echo " jetson-agx-orin-devkit"
echo " jetson-orin-nano-devkit"
echo " jetson-orin-nano-devkit-nvme"
echo " revision - SKU revision number"
echo " jetson-xavier-nx-devkit: default"
echo " jetson-agx-xavier-devkit: default"
echo " jetson-agx-orin-devkit: default"
echo " jetson-orin-nano-devkit: default"
echo " device - Root filesystem device"
echo " jetson-xavier-nx-devkit: SD/USB"
echo " jetson-agx-xavier-devkit: SD/USB"
echo " jetson-agx-orin-devkit: SD/USB"
echo " jetson-orin-nano-devkit: SD/USB"
echo " jetson-orin-nano-devkit-nvme: NVME/USB"
echo "Example:"
echo "${script_name} -o sd-blob.img -b jetson-xavier-nx-devkit -d SD"
echo "${script_name} -o sd-blob.img -b jetson-agx-orin-devkit -d USB"
echo "${script_name} -o nvme-blob.img -b jetson-orin-nano-devkit-nvme -d NVME"
exit 1
}
function cleanup() {
set +e
if [ -n "${tmpdir}" ]; then
umount "${tmpdir}"
rmdir "${tmpdir}"
fi
if [ -n "${loop_dev}" ]; then
losetup -d "${loop_dev}"
fi
}
trap cleanup EXIT
function check_device()
{
case "${board}" in
jetson-xavier-nx-devkit)
case "${rootfs_dev}" in
"SD" | "sd")
rootfs_dev="mmcblk0p1"
;;
"USB" | "usb")
rootfs_dev="sda1"
;;
*)
usage "Incorrect root filesystem device - Supported devices - SD, USB"
;;
esac
;;
jetson-agx-xavier-devkit)
case "${rootfs_dev}" in
"SD" | "sd")
rootfs_dev="mmcblk1p1"
;;
"USB" | "usb")
rootfs_dev="sda1"
;;
*)
usage "Incorrect root filesystem device - Supported devices - SD, USB"
;;
esac
;;
jetson-agx-orin-devkit)
case "${rootfs_dev}" in
"SD" | "sd")
rootfs_dev="mmcblk1p1"
;;
"USB" | "usb")
rootfs_dev="sda1"
;;
*)
usage "Incorrect root filesystem device - Supported devices - SD, USB"
;;
esac
;;
jetson-orin-nano-devkit)
case "${rootfs_dev}" in
"SD" | "sd")
rootfs_dev="mmcblk0p1"
;;
"USB" | "usb")
rootfs_dev="sda1"
;;
*)
usage "Incorrect root filesystem device - Supported devices - SD, USB"
;;
esac
;;
jetson-orin-nano-devkit-nvme)
case "${rootfs_dev}" in
"NVME" | "nvme")
rootfs_dev="nvme0n1p1"
;;
"USB" | "usb")
rootfs_dev="sda1"
;;
*)
usage "Incorrect root filesystem device - Supported devices - NVME, USB"
;;
esac
;;
esac
}
function check_revision()
{
case "${board}" in
jetson-xavier-nx-devkit)
rev="000"
;;
esac
}
function check_pre_req()
{
if [ $(id -u) -ne 0 ]; then
echo "ERROR: This script requires root privilege" > /dev/stderr
usage
exit 1
fi
while [ -n "${1}" ]; do
case "${1}" in
-h | --help)
usage
;;
-b | --board)
[ -n "${2}" ] || usage "Not enough parameters"
board="${2}"
shift 2
;;
-o | --outname)
[ -n "${2}" ] || usage "Not enough parameters"
sd_blob_name="${2}"
shift 2
;;
-r | --revision)
[ -n "${2}" ] || usage "Not enough parameters"
rev="${2}"
shift 2
;;
-d | --device)
[ -n "${2}" ] || usage "Not enough parameters"
rootfs_dev="${2}"
shift 2
;;
*)
usage "Unknown option: ${1}"
;;
esac
done
if [ "${board}" == "" ]; then
echo "ERROR: Invalid board name" > /dev/stderr
usage
else
case "${board}" in
jetson-xavier-nx-devkit)
boardid="3668"
target="jetson-xavier-nx-devkit"
storage="sdcard"
;;
jetson-agx-xavier-devkit)
boardid="2888"
target="jetson-agx-xavier-devkit"
storage="sdmmc_user"
;;
jetson-agx-orin-devkit)
boardid="3701"
target="jetson-agx-orin-devkit"
storage="sdmmc_user"
;;
jetson-orin-nano-devkit)
boardid="3767"
boardsku="0005"
target="jetson-orin-nano-devkit"
storage="sdcard"
;;
jetson-orin-nano-devkit-nvme)
boardid="3767"
boardsku="0005"
target="jetson-orin-nano-devkit-nvme"
storage="nvme"
;;
*)
usage "Unknown board: ${board}"
;;
esac
fi
check_revision
check_device
if [ "${sd_blob_name}" == "" ]; then
echo "ERROR: Invalid SD blob image name" > /dev/stderr
usage
fi
if [ ! -f "${l4t_dir}/flash.sh" ]; then
echo "ERROR: ${l4t_dir}/flash.sh is not found" > /dev/stderr
usage
fi
if [ ! -f "${l4t_tools_dir}/nvptparser.py" ]; then
echo "ERROR: ${l4t_tools_dir}/nvptparser.py is not found" > /dev/stderr
usage
fi
if [ ! -d "${bootloader_dir}" ]; then
echo "ERROR: ${bootloader_dir} directory not found" > /dev/stderr
usage
fi
if [ ! -d "${rfs_dir}" ]; then
echo "ERROR: ${rfs_dir} directory not found" > /dev/stderr
usage
fi
}
function create_raw_image()
{
# Calculate the raw image size by accumulating partition sizes, rounding each up to 1 MB (2048 sectors * 512 bytes), plus 2 MB for the GPTs
sd_blob_size=$("${l4t_tools_dir}/nvptparser.py" "${signed_image_dir}/${signed_cfg}" "${storage}" | awk -F'[=;]' '{sum += (int($6 / (2048 * 512)) + 1)} END {printf "%dM\n", sum + 2}')
echo "${script_name} - creating ${sd_blob_name} of ${sd_blob_size}..."
dd if=/dev/zero of="${sd_blob_name}" bs=1 count=0 seek="${sd_blob_size}"
}
function create_signed_images()
{
echo "${script_name} - creating signed images"
pushd "${l4t_dir}"
# rootfs size = rfs_dir size + extra 10% for ext4 metadata and safety margin
rootfs_size=$(du -ms "${rfs_dir}" | awk '{print $1}')
rootfs_size=$((rootfs_size + (rootfs_size / 10) + 100))
# Generate signed images
BOARDID="${boardid}" BOARDSKU="${boardsku}" FAB="${rev}" BUILD_SD_IMAGE=1 BOOTDEV="${rootfs_dev}" "${l4t_dir}/flash.sh" "--no-flash" "--sign" "-S" "${rootfs_size}MiB" "${target}" "${rootfs_dev}"
popd
if [ ! -f "${bootloader_dir}/flashcmd.txt" ]; then
echo "ERROR: ${bootloader_dir}/flashcmd.txt not found" > /dev/stderr
exit 1
fi
if [ ! -d "${signed_image_dir}" ]; then
echo "ERROR: ${bootloader_dir}/signed directory not found" > /dev/stderr
exit 1
fi
chipid=$(sed -nr 's/.*chip ([^ ]*).*/\1/p' "${bootloader_dir}/flashcmd.txt")
if [ "${chipid}" = "0x21" ]; then
signed_cfg="flash.xml"
else
signed_cfg="flash.xml.tmp"
fi
if [ ! -f "${signed_image_dir}/${signed_cfg}" ]; then
echo "ERROR: ${signed_image_dir}/${signed_cfg} not found" > /dev/stderr
exit 1
fi
}
function create_partitions()
{
echo "${script_name} - create partitions"
partitions=($("${l4t_tools_dir}/nvptparser.py" "${signed_image_dir}/${signed_cfg}" "${storage}"))
sgdisk -og "${sd_blob_name}"
for part in "${partitions[@]}"; do
eval "${part}"
if [ "${part_name}" = "master_boot_record" ]; then
continue
fi
part_size=$((${part_size} / 512)) # convert to sectors
sgdisk -n "${part_num}":0:+"${part_size}" \
-c "${part_num}":"${part_name}" \
-t "${part_num}":"${part_type}" "${sd_blob_name}"
done
}
function write_partitions()
{
echo "${script_name} - write partitions"
loop_dev="$(losetup --show -f -P "${sd_blob_name}")"
for part in "${partitions[@]}"; do
echo process ${part} started
eval "${part}"
target_file=""
if [ "${part_name}" = "APP" ]; then
target_file="${bootloader_dir}/${part_file}.raw"
elif [ -e "${signed_image_dir}/${part_file}" ]; then
target_file="${signed_image_dir}/${part_file}"
elif [ -e "${bootloader_dir}/${part_file}" ]; then
target_file="${bootloader_dir}/${part_file}"
fi
if [ "${part_name}" = "master_boot_record" ]; then
dd conv=notrunc if="${signed_image_dir}/${part_file}" of="${sd_blob_name}" bs="${part_size}" count=1
echo process ${part} continued
continue
fi
if [ "${target_file}" != "" ] && [ "${part_file}" != "" ]; then
echo "${script_name} - writing ${target_file}"
sudo dd if="${target_file}" of="${loop_dev}p${part_num}"
fi
echo process ${part} finished
done
losetup -d "${loop_dev}"
loop_dev=""
}
boardsku=""
sd_blob_name=""
sd_blob_size=""
script_name="$(basename "${0}")"
l4t_tools_dir="$(cd "$(dirname "${0}")" && pwd)"
l4t_dir="${l4t_tools_dir%/*}"
if [ -z "${ROOTFS_DIR}" ]; then
rfs_dir="${l4t_dir}/rootfs"
else
rfs_dir="${ROOTFS_DIR}"
fi
bootloader_dir="${l4t_dir}/bootloader"
signed_image_dir="${bootloader_dir}/signed"
loop_dev=""
tmpdir=""
echo "********************************************"
echo " Jetson Disk Image Creation Tool "
echo "********************************************"
check_pre_req "${@}"
create_signed_images
create_raw_image
create_partitions
write_partitions
echo "********************************************"
echo " Jetson Disk Image Creation Complete "
echo "********************************************"
These are the differences from the original code.
I don’t know what this has to do with your original issue.
The NVMe image (whether it works or not) and flashing with rootfs A/B are two completely different things.
Sorry, please temporarily ignore the A/B issue.
Could you help check why adding NVMe support to “jetson-disk-image-creator.sh” causes the dd error?
What is the right way to support NVMe in “jetson-disk-image-creator.sh”?
I don’t see anything about dd in this entire post.
You didn’t even mention the complete workflow.
The dd error is reported by this code in tools/jetson-disk-image-creator.sh:
function write_partitions()
{
echo "${script_name} - write partitions"
loop_dev="$(losetup --show -f -P "${sd_blob_name}")"
for part in "${partitions[@]}"; do
echo process ${part} started
eval "${part}"
target_file=""
if [ "${part_name}" = "APP" ]; then
target_file="${bootloader_dir}/${part_file}.raw"
elif [ -e "${signed_image_dir}/${part_file}" ]; then
target_file="${signed_image_dir}/${part_file}"
elif [ -e "${bootloader_dir}/${part_file}" ]; then
target_file="${bootloader_dir}/${part_file}"
fi
if [ "${part_name}" = "master_boot_record" ]; then
dd conv=notrunc if="${signed_image_dir}/${part_file}" of="${sd_blob_name}" bs="${part_size}" count=1
echo process ${part} continued
continue
fi
if [ "${target_file}" != "" ] && [ "${part_file}" != "" ]; then
echo "${script_name} - writing ${target_file}"
sudo dd if="${target_file}" of="${loop_dev}p${part_num}"
fi
echo process ${part} finished
done
losetup -d "${loop_dev}"
loop_dev=""
}
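Whatever the fix in tegraflash, this branch can be made more defensive so an empty part_file fails loudly instead of handing dd a directory. A hedged sketch (variable names mirror write_partitions() above; the guard itself is my suggestion, not NVIDIA code):

```shell
#!/bin/sh
# Guarded version of the master_boot_record branch, wrapped in a function
# so it can be exercised standalone with a throwaway directory.
write_mbr() {
    part_file="${1}"; signed_image_dir="${2}"; sd_blob_name="${3}"; part_size=512
    # Refuse to run dd when no MBR image was generated (empty or missing file).
    if [ -z "${part_file}" ] || [ ! -f "${signed_image_dir}/${part_file}" ]; then
        echo "ERROR: MBR image '${part_file}' not found in ${signed_image_dir}" >&2
        return 1
    fi
    dd conv=notrunc if="${signed_image_dir}/${part_file}" \
        of="${sd_blob_name}" bs="${part_size}" count=1 2>/dev/null
}

tmp="$(mktemp -d)"
: > "${tmp}/blob.img"
write_mbr "" "${tmp}" "${tmp}/blob.img"; guard_rc=$?      # empty part_file: rejected
printf 'MBR' > "${tmp}/mbr_12_0.bin"
write_mbr "mbr_12_0.bin" "${tmp}" "${tmp}/blob.img"; write_rc=$?   # real file: written
echo "guard_rc=${guard_rc} write_rc=${write_rc}"
rm -rf "${tmp}"
```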
Our goal is to create a single disk image for offline flashing.
We found that the original tools/jetson-disk-image-creator.sh script only supports SD and USB,
so we tried to add NVMe image support.
Are you able to create an SD card image with your current script?
The dd error does not look related to your own changes.
Please also post the complete log from running the script.
I’m also asking for the log of the failure case with NVMe.
We will need some more time to check it.
jacky_gong:
It seems there are two choices:
One is changing the instance number to 0 in flash_t234_qspi_nvme.xml.
Two is adding “‘nvme_4_master_boot_record’: ‘mbr_12_4.bin’,” to the map.
I selected the first solution, but it caused this new issue.
I just tried the second method, which works correctly.
For NVMe disks on the Orin Nano DevKit, the instance number is tied to the physical connector (C4/C7), so you cannot change it arbitrarily.
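The second method is a one-line addition to the map quoted earlier. A self-contained sketch (a stand-in file is used here; on a real BSP, point "impl" at bootloader/tegraflash_impl_t234.py and keep a backup — whether matching gpt_primary/gpt_secondary entries for instance 4 are also needed is an assumption to verify):

```shell
#!/bin/sh
# Insert an instance-4 MBR entry into tegraflash_gpt_image_name_map.
# The stand-in file mimics the map quoted from tegraflash_impl_t234.py.
impl="$(mktemp)"
cat > "${impl}" <<'EOF'
tegraflash_gpt_image_name_map = {
    'nvme_0_master_boot_record': 'mbr_12_0.bin',
    'nvme_0_primary_gpt': 'gpt_primary_12_0.bin',
}
EOF
# GNU sed: insert the new entry just before the existing instance-0 entry.
sed -i "/'nvme_0_master_boot_record'/i\\
    'nvme_4_master_boot_record': 'mbr_12_4.bin'," "${impl}"
added="$(grep -c "nvme_4_master_boot_record" "${impl}")"
echo "entries added: ${added}"
rm -f "${impl}"
```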
This topic was automatically closed by the system on August 2, 2024, 14 days after the last reply. New replies are no longer allowed.