SPI0 interface configured as a slave on the Jetson Xavier NX

Hello,

I am trying to configure the SPI0 interface as a slave on the Jetson Xavier NX (custom board, but it uses exactly the same pinning on that interface as the devkit). After reviewing the documentation, I followed these steps:

  • In the device tree, I changed the SPI0 node so it uses the tegra124-spi-slave driver, as described in other forum threads

  • Compiled that DTB and flashed it onto the Jetson (a rough sketch of the commands is shown below this list)

  • Used the jetson-io tool to enable the correct pinmux
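For reference, the compile-and-flash step was roughly the following; the DTB filename and board config below are the devkit ones and only an assumption, so adjust them for your custom board:

# decompile the kernel DTB, change the SPI0 compatible string, recompile
dtc -I dtb -O dts -o tegra194.dts kernel_tegra194-p3668-all-p3509-0000.dtb
# ... edit spi@3210000 { compatible = "nvidia,tegra124-spi-slave"; ... } in tegra194.dts ...
dtc -I dts -O dtb -o kernel_tegra194-p3668-all-p3509-0000.dtb tegra194.dts

# flash only the kernel DTB partition (board in recovery mode, run from the L4T directory)
sudo ./flash.sh -r -k kernel-dtb jetson-xavier-nx-devkit mmcblk0p1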

At the moment I am unable to capture anything on the SPI bus, so I would like to rule out any configuration error on the Jetson side.

My questions are:

  1. Are these steps, in this order, correct for configuring the interface as a slave?

  2. Do I need any extra pinmux configuration to enable that interface as a slave, or is the above sufficient?

Hi borjabasket14,

What's your Jetpack version in use?

To enable SPI slave mode, you could refer to the following topic for Xavier NX.
How to set to spi slave mode - Jetson & Embedded Systems / Jetson Xavier NX - NVIDIA Developer Forums

You could use the devmem command to check the current values of the pinmux registers for the SPI-related pins.
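For example, with devmem2 (or busybox devmem), read the pad control register of each SPI1 pin; the aperture base 0x02430000 comes from the pinmux@2430000 node, and the per-pad offsets have to be taken from the Xavier pinmux spreadsheet / TRM:

# read one pad control register: base 0x02430000 + per-pad offset from the pinmux spreadsheet
sudo devmem2 0x02430000 w
# as far as I know, bits [1:0] select the function, bits [3:2] the pull
# (01 = pull-down, 10 = pull-up) and bit 6 enables the input receiver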

Hello,

Thanks. I have followed the steps from that link, and it helped, but I still cannot get it to run. A question:

  1. What should the correct pinmux settings be for that interface as a slave? I configured the pinmux with the jetson-io.py utility, but after checking the link you posted, my settings differ from what is described there

What's your Jetpack version in use?

Could you help to provide the current device tree and pinmux settings for a further check?
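For example, you could dump the device tree that is actually running on the board and attach the SPI and pinmux nodes from it:

# dump the live device tree from /proc into a readable .dts file
dtc -I fs -O dts -o current.dts /proc/device-tree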

Hi,

Release used is 32.6.1, with jetpack 4.9.

Device tree settings are as follows:

  • First, here are the changes made by jetson-io.py (refer to my first post for the full list of steps I followed to configure the interface):
pinmux@2430000 {
		pinctrl-0 = <0x17f>;
		pinctrl-names = "default";
		compatible = "nvidia,tegra194-pinmux";
		reg = <0x0 0x2430000 0x0 0x17000 0x0 0xc300000 0x0 0x4000>;
		#gpio-range-cells = <0x3>;
		status = "okay";
		linux,phandle = <0xb0>;
		phandle = <0xb0>;

		exp-header-pinmux {
			phandle = <0x17f>;
			linux,phandle = <0x17f>;

			hdr40-pin26 {
				nvidia,lpdr = <0x0>;
				nvidia,enable-input = <0x1>;
				nvidia,tristate = <0x0>;
				nvidia,pull = <0x2>;
				nvidia,function = "spi1";
				nvidia,pins = "spi1_cs1_pz7";
			};

			hdr40-pin24 {
				nvidia,lpdr = <0x0>;
				nvidia,enable-input = <0x1>;
				nvidia,tristate = <0x0>;
				nvidia,pull = <0x2>;
				nvidia,function = "spi1";
				nvidia,pins = "spi1_cs0_pz6";
			};

			hdr40-pin23 {
				nvidia,lpdr = <0x0>;
				nvidia,enable-input = <0x1>;
				nvidia,tristate = <0x0>;
				nvidia,pull = <0x1>;
				nvidia,function = "spi1";
				nvidia,pins = "spi1_sck_pz3";
			};

			hdr40-pin21 {
				nvidia,lpdr = <0x0>;
				nvidia,enable-input = <0x1>;
				nvidia,tristate = <0x0>;
				nvidia,pull = <0x1>;
				nvidia,function = "spi1";
				nvidia,pins = "spi1_miso_pz4";
			};

			hdr40-pin19 {
				nvidia,lpdr = <0x0>;
				nvidia,enable-input = <0x1>;
				nvidia,tristate = <0x0>;
				nvidia,pull = <0x1>;
				nvidia,function = "spi1";
				nvidia,pins = "spi1_mosi_pz5";
			};
		};
};

  • Device tree configuration for SPI0
spi@3210000 {
		compatible = "nvidia,tegra124-spi-slave";
		reg = <0x0 0x3210000 0x0 0x10000>;
		interrupts = <0x0 0x24 0x4>;
		#address-cells = <0x1>;
		#size-cells = <0x0>;
		iommus = <0x2 0x20>;
		dma-coherent;
		dmas = <0x1b 0xf 0x1b 0xf>;
		dma-names = "rx", "tx";
		nvidia,dma-request-selector = <0x1b 0xf>;
		spi-max-frequency = <0x3dfd240>;
		nvidia,clk-parents = "pll_p", "clk_m";
		clocks = <0x4 0x87 0x4 0x66 0x4 0xe>;
		clock-names = "spi", "pll_p", "clk_m";
		resets = <0x5 0x5b>;
		reset-names = "spi";
		status = "okay";
		nvidia,clock-always-on;
		linux,phandle = <0xf7>;
		phandle = <0xf7>;

		spi@0 {
			compatible = "tegra-spidev";
			reg = <0x0>;
			spi-max-frequency = <0x3dfd240>;

			controller-data {
				nvidia,slave-ready-gpio = <0x13 0x9b 0x0>;
				nvidia,enable-hw-based-cs;
				status = "okay";
			};
		};
	};
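For completeness, the presence of the node and of the spidev device can be checked like this (the node path and device name are derived from the reg address above, so adjust them if they differ):

find /proc/device-tree -name "spi@3210000"       # locate the SPI0 node in the live device tree
cat /proc/device-tree/spi@3210000/compatible     # should print nvidia,tegra124-spi-slave (use the path found above)
ls /dev/spidev0.*                                # spidev node created by the tegra-spidev child
dmesg | grep -i spi                              # probe or error messages from the SPI driver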

The current pinmux settings read with devmem2 are as follows:

CS0 → 0x448
MISO → 0x444
CLK → 0x444
MOSI → 0x444

Thanks

If you are using R32.6.1, it should be Jetpack 4.6.

Have you verified that SPI1 works as expected with a loopback test?

Hi,

Yes sorry, it was a typo.

And no, I have not verified it with the loopback test. I will try that now. Are there any particular settings I should apply to the pinmux and/or device tree, or are the settings I already have sufficient?

You could refer to the following thread to do the loopback test step by step.
Jetson Nano SPI Bus Not Working - #10 by KevinFFF
The thread is for the Jetson Nano, but the steps should be similar for the Jetson Xavier NX.
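In short, the loopback test is roughly the following; note that it drives the bus as a master, so it is easiest to run with the default master compatible string, and the device node below assumes SPI0 is registered as spidev0.0:

# short pin 19 (MOSI) and pin 21 (MISO) on the 40-pin header, then:
sudo modprobe spidev
# spidev_test is built from the kernel sources under tools/spi
sudo ./spidev_test -D /dev/spidev0.0 -v -p "HelloXavierNX"
# with the jumper in place, the RX dump should echo the TX data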

Hello everybody,

Just for anybody out there having issues, here are the steps to follow:

  • Look for the corresponding node in the device tree (for the Jetson Xavier NX it is spi@3210000, in the file tegra194-soc-spi.dtsi). The only thing you need to change there is the compatible string: instead of "nvidia,tegra186-spi", which selects the driver, put "nvidia,tegra186-spi-slave". If you use the 124 variant, as suggested in other forum posts, it would not work (at least for me). I do not really understand this, as I had a look at the slave driver in the kernel sources and its compatible strings always point to the same driver.

  • With the jetson-io.py script, configure the pinmux

With this, the interface is then correctly configured as a slave.
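You can also confirm through sysfs which driver actually took the controller; the platform device name 3210000.spi is derived from the node's unit address:

readlink /sys/bus/platform/devices/3210000.spi/driver   # the symlink target is the SPI driver that bound
dmesg | grep -i "3210000.spi"                            # probe messages for this controller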

