Bug in spi-tegra114.c and other spi related things

I’ve been pulling my hair out trying to get the nvidia,clk_delay_between_packets, nvidia,cs_inactive_cycles, nvidia,cs-setup-clk-count and nvidia,cs-hold-clk-count spi slave attributes to work.

I finally got nvidia,cs-setup-clk-count and nvidia,cs-hold-clk-count to work so I can now set the time between CS going low and the clock starting, and the time between the clock stopping and the CS going high. This was only with the spi-tegra114.c patch in one of the other threads.

Getting nvidia,cs_inactive_cycles to work eluded me until I actually looked at the code. Here’s a snippet from tegra_spi_set_timing2.

```
if (!cdata || tspi->prod_list)
	return;
if (!cdata->clk_delay_between_packets)
	return;
if (cdata->cs_inactive_cycles) {
	u32 inactive_cycles;

	SPI_SET_CS_ACTIVE_BETWEEN_PACKETS(spi_cs_timing2,
					  spi->chip_select,
```

Based on that code, cs_inactive_cycles is ignored if clk_delay_between_packets is NOT set.

OK, now look at this snippet from tegra_spi_setup.

```
if (cdata && cdata->clk_delay_between_packets) {
	if (cdata->cs_inactive_cycles || !cstate->cs_gpio_valid) {
		dev_err(&spi->dev,
			"Invalid cs packet delay config\n");
		tegra_spi_cleanup(spi);
		return -EINVAL;
	}
	cdata->cs_inactive_cycles = cdata->clk_delay_between_packets;
}
```

According to that, clk_delay_between_packets and cs_inactive_cycles can’t both be set at the same time.

OK, all clk_delay_between_packets does is set cs_inactive_cycles, so just set clk_delay_between_packets, right? Unfortunately, setting clk_delay_between_packets requires that you use GPIOs for the chip selects. But if you use GPIOs for chip selects, nvidia,cs-setup-clk-count and nvidia,cs-hold-clk-count no longer work. It’s also unclear why clk_delay_between_packets can only be used with GPIO chip selects, since all it does is set cs_inactive_cycles. The workaround is to either remove the check in tegra_spi_set_timing2 or remove the GPIO restriction in tegra_spi_setup.

Finally, I haven’t found any way to lower the time between transactions, only increase it. At 5MHz, with 8-bit words and only 1 byte per transaction, the gap is a minimum of 48us. You can increase the gap using the “delay_usecs” parameter on the write, but that’s it. The gap seems to be somewhat proportional to the clock period, but at 50MHz the gap is still 32us and at 1MHz the gap is about 108us. It doesn’t matter if it’s a PIO or DMA transfer. I’m guessing that the gap is a function of waiting for the transaction to complete, or maybe a hardware restriction? Can someone from NVIDIA comment?

Hi gtj
Thanks for your report. We are having an internal discussion with the developers to check this. We will update once we get any comments.

@gtj
Are you working on an r32.x base?

Yeah, this is the kernel downloaded by running source_sync from r32.3.1.
The kernel branch is l4t-r32.3.1-4.9 and the last tag is tegra-l4t-r32.3.1

clk_delay_between_packets and cs_inactive_cycles can’t both be set at the same time
It is not possible, and I don’t really understand the question either; it is an either/or case. How can CS be inactive and active at the same time?

why clk_delay_between_packets can only be used with GPIO chip-selects since all it does it set cs_inactive_cycles
Hardware does not support an active delay between packets; this is clear from reading the TRM. We achieve a delay between packets by setting an inactive delay and using the chip select as a GPIO to trick the SPI hardware: the SPI hardware sets CS inactive between packets, but since it is in GPIO mode, the CS line stays active.
And since this requires a software chip select, the setup and hold time settings are not possible. But they will always be much higher than what the hardware provides.

Finally, I haven’t found any way to lower the time between transactions
Using an active delay will not help with performance. We need the exact use case to understand whether we can reduce that further. If there is a series of small transactions, it is better to combine them all into multiple transfers of a single message.

I couldn’t find a TRM for the Nano so I’m using the X1 TRM (Chapter 37), just FYI.

OK, Let’s say I’m using hardware chip select and I want to control the interval between packets…

  • Timing register 2 has both CS_ACTIVE_BETWEEN_PACKETS and CYCLES_BETWEEN_PACKETS so that’s good.
  • The device tree controller-data has enable-hw-based-cs set and cs_inactive_cycles = 5.

Issues:

  • The base nvidia,p3449-0000-b00+p3448-0000-b00.dtb has “prod-settings”, which completely disables the setting of controller-data in an overlay. You have to add a “disabled” status to “prod-settings” in the overlay to get any of the timing parameters to work. It’s a minor issue though, since you can work around it.

  • If you look in the first code snippet from spi-tegra114.c tegra_spi_set_timing2() in my first post you’ll see that if clk_delay_between_packets is NOT set, cs_inactive_cycles is IGNORED. However, you can’t set clk_delay_between_packets with hardware chip select as shown in my second code snippet from tegra_spi_setup(). So you’re stuck. If you comment out that check for clk_delay_between_packets in tegra_spi_set_timing2(), cs_inactive_cycles works fine in hardware chip select mode.
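For reference, the “prod-settings” workaround from the first bullet could be sketched as an overlay fragment like this (the &spi0 label and node placement are illustrative; check the actual base DTB for the exact path):

```
&spi0 {
	prod-settings {
		status = "disabled";
	};
};
```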

So let’s say you now want to try software/gpio CS.

  • You set cs-gpios correctly in the device tree, remove enable-hw-based-cs from the controller data, and add clk_delay_between_packets = 5.

Issues:

  • If you look at the second code snippet from tegra_spi_setup() you’ll see that the checks will pass and cs_inactive_cycles is set to clk_delay_between_packets, so cs_inactive_cycles is now 5. Back up in tegra_spi_set_timing2, the checks all pass and the delay is set correctly. Now here’s the issue: tegra_spi_setup() is not just run at device initialization, it’s run any time configuration settings like bits per word, mode, etc. are set. Since cs_inactive_cycles is now 5, the next time the checks are run they’ll fail with “Invalid cs packet delay config”.

To make matters worse, an spi-tegra114.c patch that you supplied in another forum post a few years ago that allows the setup and hold clock cycle settings to take effect isn’t merged into spi-tegra114.c, even in 32.3.1.

The result of all of this is that ALL OF THE CONTROLLER-DATA TIMING PARAMETERS ARE UNUSABLE!
Even the tap_delays are ignored because of the “prod-settings” issue.

Oh one other thing I forgot to mention…

The driver assumes that if you’ve set cs_inactive_cycles or clk_delay_between_packets, you want CS to NOT be active between packets. This is not a good assumption. It’s quite possible that someone would want the delay between the packets AND to keep CS active between them.

Read the last few posts in this thread…
https://devtalk.nvidia.com/default/topic/1069529/jetson-nano/spi-slave-can-only-receive-the-second-packet-from-nano-dev-kit-spi-master-/

I’d suggest a new boolean device tree parameter to specifically set cs-active-between-packets.


The X1 TRM documents SPI_TIMING_REG2_0 as supporting both CS_ACTIVE_BETWEEN_PACKETS and CYCLES_BETWEEN_PACKETS but it appears that it doesn't work.  If you set CS_ACTIVE_BETWEEN_PACKETS then CYCLES_BETWEEN_PACKETS seems to be ignored and is therefore 0.  That seems to be a bug in the hardware since setting CYCLES_BETWEEN_PACKETS to 0 defeats the purpose of setting CS_ACTIVE_BETWEEN_PACKETS.

There's a commit from a few years ago that implies it's handled in the driver...

```
commit b06b82826e64d3126235388fc4d3d9127078a6f4
Author: Laxman Dewangan <ldewangan@nvidia.com>
Date:   Mon Jun 27 08:28:16 2016 +0530

    Add support for delay between packets. SPI controller does
    not support this feature directly but it supports the
    chipselct inactive between packets and the number of clock
    cycle for inactive state.
    
    So to keep chipselect active between packets with delay, chipselect
    is converted to GPIO mode and set for proper level and configure
    controller to CS inactive between packets to get the packet delay.
```

It certainly doesn't work now and I doubt it ever did.

This is all very frustrating.

Hi
Please help verify the problem with the patch from the link below.

Thanks @ShaneCCC. I’ll try and test today or tomorrow.