SPI failing with dvfs error

I'm trying to send something over SPI, but it fails with:

tegra-dvfs: rate 408000000 too high for dvfs on sbc1
spi-tegra114 7000d400.spi: clk_prepare failed: -22
spi_master spi0: Failed to power device: -22

Currently using tegra210-p3448-0002-p3449-0000-b00.dtb and L4T R32.4.3

Previous tests on the Nano devkit with SD card worked fine (L4T R32.4.3 and tegra210-p3448-0000-p3449-0000-a02.dtb).

Could you share the complete logs and also the device tree properties?

Here you are:
dvfs_issue.zip (28.2 KB)

Could you run the spidev_test to check?

I had to add the spidev_test code because it wasn’t on the system yet.

Also, I forgot to mention that we are using the Yocto meta-tegra layer for this.

root@jetson-nano-emmc:~# ./spidev_test -D /dev/spidev0.0
[ 73.337596] tegra-dvfs: rate 408000000 too high for dvfs on sbc1
[ 73.344130] spi-tegra114 7000d400.spi: clk_prepare failed: -22
[ 73.350601] spi_master spi0: Failed to power device: -22
spi mode: 0x0
bits per word: 8
max speed: 500000 Hz (500 KHz)
RX | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | …

Any pointers, or tests I could try?

Please try reducing the “spi-max-frequency” in the device tree to check.
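For reference, that property lives on the SPI child node in the board device tree; something along these lines (the node names and the 12 MHz value are illustrative, not taken from this board's dts):

```dts
&spi0 {
        status = "okay";
        spidev@0 {
                compatible = "spidev";
                reg = <0>;
                /* illustrative value, chosen below the dvfs limit */
                spi-max-frequency = <12000000>;
        };
};
```

After changing it, rebuild the dtb and reflash or replace it so the running kernel actually picks up the new value.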

I have exactly the same issue, but it isn’t solved by reducing the spi-max-frequency.
Did you change anything else to solve it?


Make sure the dtb has actually been applied. You can extract the live device tree to verify:

sudo dtc -I fs -O dts -o extracted_proc.dts /proc/device-tree
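Once extracted, you can grep the result for the SPI node's properties to confirm your change took effect. The snippet below is a sketch: a minimal sample file stands in for the real extracted_proc.dts, and the node layout and 25 MHz value are assumptions for illustration.

```shell
# Stand-in for the real file produced by:
#   sudo dtc -I fs -O dts -o extracted_proc.dts /proc/device-tree
cat > extracted_proc.dts <<'EOF'
spi@7000d400 {
        status = "okay";
        spidev@0 {
                compatible = "spidev";
                spi-max-frequency = <25000000>;
        };
};
EOF

# Check that the node is enabled and carries the expected frequency.
grep -n 'status\|spi-max-frequency' extracted_proc.dts
```

If the printed spi-max-frequency does not match what you put in your .dts, the dtb you edited is not the one the system booted with.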

Dropping the SPI speed is not an option in our case.
For now we have worked around the problem by skipping the error and keeping the maximum possible frequency.
You can do that by adjusting drivers/soc/tegra/tegra-dvfs.c:

 	if (rate > freqs[d->num_freqs - 1]) {
 		pr_warn("tegra-dvfs: rate %lu too high for dvfs on %s\n", rate,
 			d->clk_name);
+#if 0
 		return -EINVAL;
+#else
+		pr_warn("tegra-dvfs: max. rate %lu\n", rate);
+#endif
+
 	}

If you still hit the second issue (voltage), you could add the same workaround for that.
But keep in mind that this is not a correct solution.

This has been fixed in meta-tegra: https://github.com/OE4T/meta-tegra/issues/454