I'm using a Jetson Orin Nano dev board with JetPack 6.2.
I switched spi0 into slave mode by changing 'compatible = "nvidia,tegra210-spi\0nvidia,tegra114-spi";' to 'compatible = "nvidia,tegra210-spi-slave";' for the "spi@3210000" node in the /boot/dtb/kernel_tegra234-p3768-0000+p3767-0005-nv-super.dtb file. I was following this post: Orin nano spi
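For clarity, the edit described above, shown as a decompiled device-tree fragment (only the compatible property changes; every other property of the spi@3210000 node stays as it is):

```dts
spi@3210000 {
	/* was: compatible = "nvidia,tegra210-spi\0nvidia,tegra114-spi"; */
	compatible = "nvidia,tegra210-spi-slave";
	/* all other properties unchanged */
};
```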
I now see the SPI slave in dmesg:
agostrer@ubuntu:~$ sudo dmesg|grep spi
[ 10.678715] spi-tegra124-slave 3210000.spi: Adding to iommu group 1
[ 10.682990] spi-tegra124-slave 3210000.spi: Dynamic bus number will be registered
[ 10.698373] spi-tegra114 3230000.spi: Adding to iommu group 1
[ 420.598462] spi-tegra124-slave 3210000.spi: Tx is not supported in mode 0
[ 420.598503] spi-tegra124-slave 3210000.spi: spi can not start transfer, err -22
[ 420.598559] spi_master spi0: failed to transfer one message from queue
[ 949.159991] spidev spi0.0: setup: unsupported mode bits 20
@DaneLLL, the reference setup looks clearly defined, but I'm not sure about the patch to the SPI driver. I see two different links in the discussion you mentioned, both pointing to spi-tegra114.txt files:
Both are not patches, just source code. What should I do with them? Just replace source/out/nvidia-linux-header/drivers/spi/spi-tegra114.c and rebuild the kernel? And why do I need it for the SPI slave? The SPI slave uses the spi-tegra124-slave driver, not spi-tegra114…
For comparison, here is print_usage() from the upstream (torvalds) spidev_test source:
static void print_usage(const char *prog)
{
printf("Usage: %s [-2348CDFHILMNORSZbdilopsvw]\n", prog);
puts("general device settings:\n"
" -D --device device to use (default /dev/spidev1.1)\n"
" -s --speed max speed (Hz)\n"
" -d --delay delay (usec)\n"
" -w --word-delay word delay (usec)\n"
" -l --loop loopback\n"
"spi mode:\n"
" -H --cpha clock phase\n"
" -O --cpol clock polarity\n"
" -F --rx-cpha-flip flip CPHA on Rx only xfer\n"
"number of wires for transmission:\n"
" -2 --dual dual transfer\n"
" -4 --quad quad transfer\n"
" -8 --octal octal transfer\n"
" -3 --3wire SI/SO signals shared\n"
" -Z --3wire-hiz high impedance turnaround\n"
"data:\n"
" -i --input input data from a file (e.g. \"test.bin\")\n"
" -o --output output data to a file (e.g. \"results.bin\")\n"
" -p Send data (e.g. \"1234\\xde\\xad\")\n"
" -S --size transfer size\n"
" -I --iter iterations\n"
"additional parameters:\n"
" -b --bpw bits per word\n"
" -L --lsb least significant bit first\n"
" -C --cs-high chip select active high\n"
" -N --no-cs no chip select\n"
" -R --ready slave pulls low to pause\n"
" -M --mosi-idle-low leave mosi line low when idle\n"
"misc:\n"
" -v --verbose Verbose (show tx buffer)\n");
exit(1);
}
and here is the help output of the NVIDIA binary:
agostrer@ubuntu:~/spi$ ./spidev_test_nvdia -h
Usage: ./spidev_test_nvdia [options]
version 21
-D --device device to use (default /dev/spidev0.0)
-s --speed max speed (Hz)
-g --length transaction length in bytes. Multiple lengths can be comma separated.
-z --debug enable debug mode. Specifying more than once to increase the verboisty.
-p --pattern choose pattern type. Available patterns are:
0: sequential bytes
1: even bytes
2: odd bytes
3: reverse sequential bytes
4: random bytes with crc
-F prefix Dump rx/tx buffer as raw to file. With no prefix specified, current pid will be used
-f --file Pattern file. User defined pattern. Not allowed with -p option.
Pattern file must be in space seperated bytes in hex without
"0x" prefix. Sample data: 01 AA 23 5F 3C 12
-d --delay delay (usec). For SPI master this will be the minimum delay
the driver waits before initiating each transfer. For slave
this indicates the max delay that the slave waits before it times out.
-E --expect Expected transaction length in bytes.
To test variable length data transfer feature.
Without this option, spidev_test will assume it as normal transfer
and will validate the actual transferred length with requested transaction length.
Value should be less than or equal to the transaction length.
-b --bpw bits per word
-H --cpha clock phase
-O --cpol clock polarity
-L --lsb least significant bit first
-C --cs-high chip select active high
-n --nruns Number of times to repeat the test. -1 is infinite.
-u --udelay delay b/w initiating each transfer (usec). Multiple delays can be comma separated
-r --receive only receive data
-t --transmit only transmit data
-v --minsize variable length packet start
-V --maxsize variable length packet end
-W --waitb4 wait for a keystroke before the first transfer
-w --stoperr stop on all errors
-P --packet packet mode
Examples:
To transfer 100 messages of 30 bytes each with random bytes,
spidev_test -D/dev/spidev0.0 -s18000000 -n100 -g30 -p4
To transfer 100 messages of sizes 8 and 3986 with delay of 2ms and 8ms respectively,
spidev_test -D/dev/spidev0.0 -s18000000 -n100 -g8,3968 -u2000,8000
To transfer all bytes from user defined pattern file,
spidev_test -D/dev/spidev0.0 -s18000000 -f/path/to/patternfile
To test variable length feature,
spidev_test -D/dev/spidev0.0 -s18000000 -g 30 -E 20
Return codes:
0 successfull transfer. This would mean that what was
transferred was actually received. Note that a success
would make sense only when the spi is run in a loopback
type configuration, i.e., a miso-mosi loopback or a
master-slave loopback
1 Invalid argument
5 I/O error
6 Data mismatch error. The transfer happened but there is
mismatch in tx and rx pattern. Again this makes sense only
for loopback as explained above
22 Invalid argument
agostrer@ubuntu:~/spi$
As you can see, there are a lot of differences. @ShaneCCC, maybe you have the answer?
Thank you,
Alex.
Hi @DaneLLL ,
I used the new spi-tegra114.c. I can't say I see any difference in the SPI slave behavior: it starts receiving, then fails immediately. Without the source code I just don't understand what is going on.
Thank you,
Alex.
Hi @KevinFFF ,
I set both spi@3210000 and spi@3230000 to be compatible with spi-slave (see attached device tree). Is there anything else I need to add to the device tree? It wasn't clear from the other posts. dmesg.log (68.8 KB) extracted_proc.txt (314.3 KB)
Uploaded the requested files. I had to change .dts to .txt to get them uploaded.
I used "sudo /opt/nvidia/jetson-io/jetson-io.py" to set up the GPIOs. I assumed that it controls the pinmux through the DTS; was I wrong?
We need to receive data from another device that acts as the SPI master, so one SPI slave on Jetson should be enough; I set up two just to be sure. But you must be right: using Jetson SPI1 as master may help to confirm that SPI3 is set up correctly. I'll try it.
I still want to see the source code of your spidev_test (the NVIDIA spidev_test is different from the torvalds version). Somehow your spidev_test puts the SPI slave into a mode where it waits indefinitely for data. I don't see this option in the off-the-shelf spidev_test. What command do you send from spidev_test to wait for data indefinitely?
Any progress with the source code? Did Neel talk to you about it?
I found that the NVIDIA spidev_test sends an additional ioctl command, 0x40206b00, that I don't find in the off-the-shelf spidev_test. I'm trying to work out how it waits for data in slave mode. It's really reverse engineering without the source code. Can you escalate this, please?
Thank you,
Alex.
Jetson-IO can be used to configure the pinmux of the pins for SPI usage.
Good to hear that the workflow works on your side. It also means that you've configured both the SPI master and the SPI slave correctly.
Yes, we have our own spidev_test written for Tegra SPI.
I'm not sure if I can share the source here. Please let me check this internally.
Do you mean the -r option of the spidev_test tool?
Yes, I need to know how to put the slave into receiving mode, how to specify the amount of data to be received, and when to read the data. Can you respond to me (or to your Santa Clara team) by email? We have an NDA with NVIDIA.
I figured out how to put the slave into waiting mode and receive a specified number of bytes (it took me several iterations of kernel rebuilds with debug messages in the spidev.c and spi.c files). The problem is that the SPI slave fails if (the number of bytes expected by the slave) != (the number of bytes transmitted by the master). How can I solve this? We don't know in advance how much data arrives at the SPI slave interface - it's just a constant stream.
Do you mean that you've customized the spidev_test tool to configure the SPI slave to stay in receiving mode, and that you can receive the expected data from the SPI master?
May I know what the exact data from your SPI master is,
and what you want the SPI slave to do with it?
Yes, I just changed "#if 0" to "#if 1" at spidev_test.c:151 and changed line spidev_test.c:533 to "transfer(fd, NULL /*default_tx*/, default_rx, 8 /*sizeof(default_tx)*/);". Now it waits for the master to transmit and receives 8 bytes, but fails if the master sends any other number of bytes. BTW, I experimented with the NVIDIA spidev_test utility; it fails for me if the number of bytes on the receive and transmit sides do not match.
It's just a stream of binary data at ~8 Mbps - different binary protocols that we need to parse on the fly.
Could you also share the original spidev_test.c file you were using?
And yes, you can just configure default_tx as NULL when you only want to receive data.
It seems you should configure the max transfer size rather than the exact data size for the SPI slave, since the data transaction should be controlled by the CS pin from the SPI master.
May I also know where you got our spidev_test utility?