Trying to enable SPI slave mode in Nano 2GB

Hi,
I’m trying to enable an SPI port in slave mode, but I’m not well versed in … many things.
I understand the concepts of SPI and the idea of the DTB, but I’ve seen many examples using macros that are not present in the base DTB I’m using (default image, T32 version 5.1 sources).
I’ve seen references to a tegra210-spi-slave that is not in the source tree, and others to tegra124-spi-slave (which is there).
I do not fully understand the values the driver needs, nor the dma/clock/interrupt/reset resources that must be referenced when using SPI0 (@7000d400) or SPI1 (@7000d600).

My goal is to keep SPI0 as master and have SPI1 as slave, so I can test/debug the communication on a single board.

Questions:
1- Is there a reference (explained?) DTB for a slave SPI? Would be nice if that included the pin definitions too.
2- Is this documented somewhere so I can do the honours? (RTFM :-)

Thanks!

You can configure SPI0/SPI1 with jetson-io and run spidev_test for a loopback test.
Please refer to the topic below for the spidev_test loopback procedure.

Shane,
thanks, the loopback test works fine, in master mode obviously. But I want a master-slave setup to actually test both sides of the protocol.
jetson-io goes up to the point of assigning pins and drivers, but AFAIK it leaves both ports in master mode.
So my questions stand.
After setting the second SPI port as slave by changing the driver to tegra124-spi-slave, I get an error when running the test mentioned elsewhere:
spi-tegra124-slave 7000d600.spi: Tx is not supported in mode 0
spi-tegra124-slave 7000d600.spi: spi can not start transfer, err -22
so I’m missing something, it seems…
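For reference, the device-tree change that produced the error above was only this (a minimal sketch, assuming the stock Nano DTB, where spi@7000d600 already carries the correct clocks/resets/dmas/interrupts, so only the compatible string is swapped to the slave driver):

```dts
/* Sketch of the change, overlay style: swap only the compatible
 * string of the second controller to the slave driver. All other
 * properties (clocks, resets, dmas, interrupts) are kept as they
 * are in the stock DTB. */
spi@7000d600 {
	compatible = "nvidia,tegra124-spi-slave";
	status = "okay";
};
```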

Well, I had to read up on that “mode 0”. It seems that the slave has no support for buffering TX, but if you ask the slave just to read (and the master just to write), it works.
Still, I’m playing sorcerer’s apprentice here: my slave side has some secret sauce, and I don’t know what it does, nor whether it’s needed:
spi1_0 {
	compatible = "spidev";
	reg = <0x0>;
	spi-max-frequency = <0x1f78a40>;
	controller-data {
		nvidia,cs-setup-clk-count = <0x1e>;
		nvidia,cs-hold-clk-count = <0x1e>;
		nvidia,rx-clk-tap-delay = <0x1f>;
		nvidia,tx-clk-tap-delay = <0x00>;
	};
};
which comes from one of the many posts I’ve read on the subject, but I’ve found no docs about it.

FTR, here are the two commands of the test and their output:

$ ./spidev_test -D/dev/spidev1.0 -n1 -s20000 -g12 -p4 -zz -r
Disabling transmit
using device: /dev/spidev1.0
setting spi mode for read,write
setting spi bpw
setting max speed for rd/wr
spi mode: 0
bits per word: 8 bytes per word: 1
max speed: 20000 Hz (20 KHz)
no. runs: 1
Using seed:0x6051e2c7
loop count = 0
transfer: Return actual transfer length: 12
receive packet bytes [12]
0000: FE EE CA AB FF FF FF FF 00 00 00 00
/dev/spidev1.0: TEST PASSED

$ ./spidev_test -D/dev/spidev0.0 -n1 -s20000 -g12 -p4 -zz -t
Disabling receive
using device: /dev/spidev0.0
setting spi mode for read,write
setting spi bpw
setting max speed for rd/wr
spi mode: 0
bits per word: 8 bytes per word: 1
max speed: 20000 Hz (20 KHz)
no. runs: 1
Using seed:0x6051e2cd
loop count = 0
Using rand() buffer
Using crc check
transfer packet bytes [12]
0000: FE EE CA AB FF FF FF FF 00 00 00 00
/dev/spidev0.0: TEST PASSED

The read blocks until the write comes… nice.

FTR, using the “internal” spidev_test and passing -H (clock phase), bidirectional tests work,
like:

./spidev_test_nv -zz -D/dev/spidev1.0 -n2 -s10000000 -g16 -p1 -H -d1000000 & sleep 1;
./spidev_test_nv -zz -D/dev/spidev0.0 -n2 -s10000000 -g16 -p1 -H -d100

which exchanges 2 x 16-byte messages full duplex.

I don’t really understand why that spidev_test source code is not made available, though…