Veritysetup open fails for Jetson Nano developer kit

Hi! I’m trying to configure my NVIDIA Jetson Nano developer kit (4GB) to use “veritysetup open”.

The first issue with veritysetup open was that the DM_VERITY module was not enabled in the kernel.
I followed an existing topic (DM-Verity support on Jetson Nano 4GB (B02) - Jetson & Embedded Systems / Jetson Nano - NVIDIA Developer Forums) and added the following options to “public/kernel/kernel-4.9/arch/arm64/configs/tegra_defconfig” before compiling the kernel:

CONFIG_DM_VERITY=y # Enable DM-Verity
CONFIG_DM_VERITY_HASH_PREFETCH_MIN_SIZE_128=y # DM-Verity hash prefetch optimization

By the way, the final .config file didn’t contain the “CONFIG_DM_VERITY_HASH_PREFETCH_MIN_SIZE_128” param.

After flashing, the veritysetup open command started to recognize DM_VERITY, but still failed with the error:
Verity device detected corruption after activation

The error happens with any data passed to veritysetup open; as a test, I also tried the commands from the topic I mentioned:

Create data image

dd if=/dev/zero of=~/tmp/data_partition.img bs=4k count=256
mkfs.ext4 ~/tmp/data_partition.img
tune2fs -c0 -i0 ~/tmp/data_partition.img

Create a text file for testing

sudo mount -o loop ~/tmp/data_partition.img /mnt/
cd /mnt/
sudo touch hello.txt
cd ~/tmp/
sudo umount /mnt

Create image for hashes

dd if=/dev/zero of=~/tmp/hash_partition.img bs=4k count=256
mkfs.ext4 ~/tmp/hash_partition.img
tune2fs -c0 -i0 ~/tmp/hash_partition.img

Setting up dm-verity

veritysetup -v --debug format data_partition.img hash_partition.img
sudo veritysetup open data_partition.img verity-test hash_partition.img [hashcode]
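As a side note, to avoid copy-paste mistakes with the hash, I also tried capturing the root hash directly from the format output; the awk pattern below assumes the standard “Root hash:” line that veritysetup prints:

```shell
# Capture the root hash printed by `veritysetup format` instead of pasting it
# by hand (assumes the standard "Root hash:" line in veritysetup's output):
ROOT_HASH=$(veritysetup format data_partition.img hash_partition.img \
            | awk '/^Root hash/ {print $3}')
sudo veritysetup open data_partition.img verity-test hash_partition.img "$ROOT_HASH"
```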

I also tried compiling different versions of veritysetup and enabling extended logs, but the corrupted status basically comes from device-mapper, and I didn’t find anything in veritysetup itself that I could change to make it work.
Not sure if it helps, but I also tried running veritysetup in a privileged Docker container; from there veritysetup open worked without errors, but a subsequent veritysetup status command still reported the status as corrupted.

I also tried to enable FEC and run veritysetup open with the --fec-device flag, but in this case the command just hangs in an endless attempt to fix the corrupted data. dmesg logs:

[  +0,000002] device-mapper: verity-fec: 7:2: FEC: recursion too deep
[  +0,016200] device-mapper: verity-fec: 7:2: FEC: recursion too deep
[  +0,011909] device-mapper: verity-fec: 7:2: FEC: recursion too deep
[  +0,011980] device-mapper: verity-fec: 7:2: FEC: recursion too deep
[  +0,010901] device-mapper: verity-fec: 7:2: FEC: recursion too deep
[  +0,011534] device-mapper: verity-fec: 7:2: FEC: recursion too deep
[  +0,011458] device-mapper: verity-fec: 7:2: FEC: recursion too deep
[  +0,011574] device-mapper: verity-fec: 7:2: FEC: recursion too deep
[  +0,011519] device-mapper: verity-fec: 7:2: FEC: recursion too deep
[  +0,011161] device-mapper: verity-fec: 7:2: FEC: recursion too deep
[Nov12 14:17] verity_fec_decode: 1516 callbacks suppressed
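For completeness, the FEC attempt looked roughly like this (fec_partition.img is just an illustrative name; as I understand it, the parity data has to be generated at format time with --fec-device, not only supplied to veritysetup open):

```shell
# Rough sketch of the FEC attempt; fec_partition.img is an illustrative name.
# Parity must be generated at format time as well, not only supplied at open:
dd if=/dev/zero of=~/tmp/fec_partition.img bs=4k count=256
veritysetup format data_partition.img hash_partition.img \
    --fec-device=fec_partition.img --fec-roots=2
sudo veritysetup open data_partition.img verity-test hash_partition.img [hashcode] \
    --fec-device=fec_partition.img --fec-roots=2
```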

Perhaps I need to enable some other modules in the kernel?
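The dm-verity-related options I’m aware of in this area (a guess at candidates, not a confirmed fix) would be:

```shell
# Candidate Kconfig options (a guess, not a confirmed fix):
CONFIG_DM_VERITY=y         # the verity device-mapper target itself
CONFIG_DM_VERITY_FEC=y     # forward error correction (--fec-device support)
CONFIG_CRYPTO_SHA256=y     # default hash algorithm used by veritysetup
```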
I’d appreciate any suggestions.
Thanks!

Hi skif.dh,

What’s your Jetpack version in use?

Could you share the result of the following command on your board?

$ zcat /proc/config.gz | grep -E  "CONFIG_DM_VERITY|CONFIG_DM_VERITY_HASH_PREFETCH_MIN_SIZE_128"

I’m using JetPack_4.6.4

zcat /proc/config.gz | grep -E "CONFIG_DM_VERITY|CONFIG_DM_VERITY_HASH_PREFETCH_MIN_SIZE_128"

CONFIG_DM_VERITY=y
# CONFIG_DM_VERITY_FEC is not set
# CONFIG_DM_VERITY_AVB is not set

It seems your kernel config was not applied correctly, because CONFIG_DM_VERITY_HASH_PREFETCH_MIN_SIZE_128 is still not enabled.

The other config params are applied correctly to the resulting build/.config file; only CONFIG_DM_VERITY_HASH_PREFETCH_MIN_SIZE_128 is missing.
I’m using the following command (from How to build NVIDIA Jetson Nano kernel - RidgeRun Developer Wiki):

make -C kernel/kernel-4.9/ ARCH=arm64 O=$TEGRA_KERNEL_OUT LOCALVERSION=-tegra CROSS_COMPILE=${TOOLCHAIN_PREFIX} tegra_defconfig
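To see which of the added symbols actually survived, I grep the generated .config; as far as I understand, Kconfig silently drops any CONFIG_ line whose symbol isn’t defined in the tree’s Kconfig files, which would explain losing exactly one option:

```shell
# Check which of the added symbols landed in the generated .config:
grep -E 'CONFIG_DM_VERITY' "$TEGRA_KERNEL_OUT/.config"
# Check whether the symbol is defined in this source tree at all; if it isn't,
# Kconfig silently drops the line from the resulting .config:
grep -rn 'DM_VERITY_HASH_PREFETCH_MIN_SIZE' kernel/kernel-4.9/drivers/md/Kconfig
```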

Do you mean that only CONFIG_DM_VERITY_HASH_PREFETCH_MIN_SIZE_128 is not enabled in the kernel config, while the other newly added configs are included?

Yes, I’ve got CONFIG_DM_VERITY=y applied, and I also previously tried setting CONFIG_DM_VERITY_FEC=y, which was applied as well.

It’s weird to lose only the CONFIG_DM_VERITY_HASH_PREFETCH_MIN_SIZE_128 config.

Do you still hit this issue now?
If so, please also share the full dmesg for further check.

Hi KevinFFF,
Yes, I’m still facing this issue.
dmesg output:

[918381.552596] verity_handle_err: 90 callbacks suppressed
[918381.552599] device-mapper: verity: 7:2: metadata block 1 is corrupted
[918381.569474] device-mapper: verity: 7:2: metadata block 1 is corrupted
[918381.576292] buffer_io_error: 697 callbacks suppressed
[918381.576296] Buffer I/O error on dev dm-0, logical block 0, async page read
[918381.595407] device-mapper: verity: 7:2: metadata block 1 is corrupted
[918381.602060] Buffer I/O error on dev dm-0, logical block 0, async page read
[918381.613719] device-mapper: verity: 7:2: metadata block 1 is corrupted
[918381.620497] Buffer I/O error on dev dm-0, logical block 0, async page read
[918381.629047] device-mapper: verity: 7:2: metadata block 1 is corrupted
[918381.635667] Buffer I/O error on dev dm-0, logical block 0, async page read
[918381.647983] device-mapper: verity: 7:2: metadata block 1 is corrupted
[918381.654675] Buffer I/O error on dev dm-0, logical block 0, async page read
[918381.666518] device-mapper: verity: 7:2: metadata block 1 is corrupted
[918381.673533] Buffer I/O error on dev dm-0, logical block 0, async page read
[918381.685247] device-mapper: verity: 7:2: metadata block 1 is corrupted
[918381.691964] Buffer I/O error on dev dm-0, logical block 3, async page read
[918381.717390] device-mapper: verity: 7:2: metadata block 1 is corrupted
[918381.725903] device-mapper: verity: 7:2: metadata block 1 is corrupted
[918381.732546] Buffer I/O error on dev dm-0, logical block 0, async page read
[918381.744796] Buffer I/O error on dev dm-0, logical block 0, async page read
[918381.756732] Buffer I/O error on dev dm-0, logical block 0, async page read
[918381.958276] device-mapper: verity: 7:2: reached maximum errors

Could you share the detailed steps for how you reproduce this issue on the devkit?
Just run veritysetup open?

That is correct, from the initial message of the topic:

sudo veritysetup open data_partition.img verity-test hash_partition.img [hashcode]