I have another question about SPI2 on the SPE-FW. The following picture shows the SPI2 timing (SCK, MOSI, MISO, CS) captured with a logic analyzer. From the SCK channel we can see that the SPI clock high time (T_clkh) is 60 ns or 40 ns, and the low time (T_clkl) is 100 ns or 120 ns. I set the SPI2 frequency to 6.25 Mbit/s, i.e., the clock period (T_clk) is 160 ns.
AGX SPI2 communicating with the slave
However, my slave SPI device requires that neither the clock high time nor the clock low time exceed half of the clock period, as shown in the following table:
This problem may prevent me from establishing communication with the slave device: using the same configuration (Mode 0, MSB first, 6.25 Mbit/s), I successfully communicated with the slave device from an STM32H743. The data flow is as follows:
STM32H7 SPI communicating with the slave, where T_clkh = T_clkl =T_clk/2=80ns
Therefore, I want to know which register settings can be changed to make the high time and the low time exactly equal, i.e., T_clkh = T_clkl = T_clk/2.
In the TRM (page 8045), I found a note about programmable trimmers in the SPI chapter:
Can these trimmers be used to adjust the duty cycle of the high and low levels of the SPI clock signal? If yes, how to use them in the SPE-FW?
Can you check SPI clock waveform with scope, instead of logic analyzer?
Internally I checked some SPI test results, and the duty cycle should be 45-55%. Your result looks abnormal.
I will also check it locally, but it may take time to setup the environment.
More updates later.
I enabled spi1 and spi2 (the SPE-FW SPI) in the Linux kernel by modifying the DTB file, then ran a test with the spidev_test command. spi1 is normal, but spi2 still shows the above problem in the Linux kernel. I will also do further testing with a scope.
I checked the waveform of SPI2_SCK, with
#define SPI_TEST_CLOCK_RATE 6500000
No obvious issue found.
Attach waveform for your reference.
My test device is an AGX Xavier devkit (16 GB). Is your test AGX Xavier the same model as mine? I suspect the problem may be specific to this variant.
I designed a simple PCIe x1 adapter board myself to break out the SPI2 pins.
The scope I used is Fluke 190 100MHz.
I checked SPI2 (SPE-FW) and SPI1 (Linux kernel) at 12 Mbit/s and 24 Mbit/s. In addition, I can successfully communicate with my SPI slave device over SPI1 in the Linux kernel, but SPI2 in SPE-FW cannot, even with SPI_TEST_CLOCK_RATE set to 4 Mbit/s. The SPI1 clock signal on my logic analyzer also looks normal.
Yes, I’m testing with Xavier AGX devkit and as SPE SPI doc says, A5 in J6 (PCIe slot).
From your snapshot, I do not see big difference between SPI1 and SPI2, especially for duty cycle.
You can do more tests:
- In kernel side, with SPI1, which should work, capture clk/MISO/MOSI/CS.
- In SPE side, with SPI2, which should fail, capture same signals, and compare them.
From the above snapshot, duty cycle may not be the issue.
What’s the bandwidth of the logic analyzer you are using? Too low a sample rate may result in an inaccurate waveform.
The bandwidth of my logic analyzer is 100MHz. I tested SPI2 and SPI1 in the same environment (same CLOCK_RATE, same wire length, same logic analyzer), but the results are different.
I will follow your suggestions for a deep test. Thank you!
Hi, the duty cycle on your scope is the same between SPI1 and SPI2. However, on the Xavier side, SPI1 is 3.3V while SPI2 is 1.8V. What’s the required voltage level of your SPI device (I assume 3.3V, since SPI1 works)?
Thank you for the reminder. The required voltage level of my SPI device is 3.3V. How can I make SPI2 output 3.3V?
I found that the cs-setup time and cs-hold time of SPI1 and SPI2 are very different. All tests are under 10Mbit/s SPI clock frequency. What is the reason for this? Does it affect communication?
The SPI1 configuration parameter in the DTB file:
nvidia,cs-setup-clk-count = <0x0a>;
nvidia,cs-hold-clk-count = <0x0a>;
The actual SPI1 values match the above configuration.
For SPI2, I read those times with the following call and got 0, which means nvidia,cs-setup-clk-count = 0 and nvidia,cs-hold-clk-count = 0:

timing_reg1 = tegra_spi_readl(tspi, SPI_TIMING_REG1_0);
SPI2 is 1.8V only.
There is a level shifter on the SPI1 lines that converts SPI1 to 3.3V. This level shifter could cause such a hold-time difference.
Is the same true for other development platforms that support SPE-FW, such as TX2 and Orin? That is, does SPI in SPE-FW only support 1.8V? Do I need to add a level shifter to SPI2 myself to achieve the same effect as SPI1? Is there a recommended level shifter?
A level shifter is necessary; you can refer to the Xavier carrier board P2822 schematic in the DLC for detailed info. For other platforms, please also refer to their reference designs in the DLC.