SPI works with JetPack 3.3 but not JetPack 4.2

Hello,

I modified the device tree under JetPack 3.3 and got SPI working, including talking to an actual SPI slave device from the TX2. However, I would like to move to the latest JetPack 4.2 (to stay up to date, and because other features that do not work for me in 3.3 do work in 4.2). When I make the same device-tree changes to support SPI under JetPack 4.2, SPI no longer works: my device-tree changes are present, the /dev/spidevM.N devices show up, and I can open/read/write/close them, but nothing comes out on the actual SPI pins.

With JetPack 3.3, decompiling /proc/device-tree on the TX2 (e.g., with dtc -I fs -O dts /proc/device-tree) shows my changes (for example):

spi@3240000 {
		#address-cells = <0x1>;
		#size-cells = <0x0>;
		#stream-id-cells = <0x1>;
		clock-names = "spi", "pll_p", "clk_m";
		clocks = <0xd 0x4a 0xd 0x10d 0xd 0x261>;
		compatible = "nvidia,tegra186-spi";
		dma-names = "rx", "tx";
		dmas = <0x19 0x12 0x19 0x12>;
		interrupts = <0x0 0x27 0x4>;
		linux,phandle = <0x7a>;
		nvidia,clk-parents = "pll_p", "clk_m";
		nvidia,dma-request-selector = <0x19 0x12>;
		phandle = <0x7a>;
		reg = <0x0 0x3240000 0x0 0x10000>;
		reset-names = "spi";
		resets = <0xd 0x2b>;
		status = "okay";

		spi@0 {
			compatible = "spidev";
			nvidia,cs-hold-clk-count = <0x1e>;
			nvidia,cs-setup-clk-count = <0x1e>;
			nvidia,enable-hw-based-cs;
			nvidia,rx-clk-tap-delay = <0x1f>;
			nvidia,tx-clk-tap-delay = <0x0>;
			reg = <0x0>;
			spi-max-frequency = <0x3dfd240>;
		};
};

With JetPack 4.2, decompiling /proc/device-tree shows a very similar SPI node:

spi@3240000 {
		#address-cells = <0x1>;
		#size-cells = <0x0>;
		clock-names = "spi", "pll_p", "clk_m";
		clocks = <0x10 0x4a 0x10 0x10d 0x10 0x261>;
		compatible = "nvidia,tegra186-spi";
		dma-names = "rx", "tx";
		dmas = <0x25 0x12 0x25 0x12>;
		interrupts = <0x0 0x27 0x4>;
		iommus = <0x11 0x20>;
		linux,phandle = <0x18d>;
		nvidia,clk-parents = "pll_p", "clk_m";
		phandle = <0x18d>;
		reg = <0x0 0x3240000 0x0 0x10000>;
		reset-names = "spi";
		resets = <0x10 0x2b>;
		status = "okay";

		spi@0 {
			compatible = "spidev";
			nvidia,cs-hold-clk-count = <0x1e>;
			nvidia,cs-setup-clk-count = <0x1e>;
			nvidia,enable-hw-based-cs;
			nvidia,rx-clk-tap-delay = <0x1f>;
			nvidia,tx-clk-tap-delay = <0x0>;
			reg = <0x0>;
			spi-max-frequency = <0x3dfd240>;
		};
};
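Incidentally, the hex cells in these dumps are just encoded integers, so it is easy to confirm that the child-node settings carried over unchanged between the two JetPack versions. A quick check with shell printf (values copied from the nodes above):

```shell
# Decode a few of the hex cells from the device-tree dumps above.
printf 'spi-max-frequency: %d Hz\n' 0x3dfd240   # 65000000 Hz (65 MHz)
printf 'cs-hold-clk-count: %d\n'    0x1e        # 30 clocks
printf 'rx-clk-tap-delay:  %d\n'    0x1f        # 31
```

Since the spidev child node is byte-for-byte identical in both dumps, the difference must lie elsewhere (controller setup or pinmux) rather than in the SPI device settings themselves.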

Apart from the change in how the IOMMU is handled (the new iommus property), I can’t see why this doesn’t work with JetPack 4.2. Are there additional changes to the device tree that need to be made to get SPI working with JetPack 4.2?

Any help would be appreciated.

Sincerely,
Dan

Please refer to the topic below to check the pinmux.

Thank you for the link. I have managed to get the SPI port working with devmem2 calls and am looking into modifying the Customer-Pinmux-Template spreadsheet so I can change the config file.

However, in the process of looking into this, I discovered that pin E13, which I would like to use for SPI chip select 1, is not in the spreadsheet. Is this an oversight, or is this pin not available to be used? It is labelled as SPI1 CS1 on the NVidia dev board schematic.

@dan.madill
That pin is left floating in the module.

Thank you. BTW, I did get the SPI ports working based on the link you provided (other than pin E13, of course), by generating a new MB1 BCT config file via the Customer-Pinmux-Template spreadsheet and the generated DTSI files and pinmux-dts2cfg.py command.

Thank you for your help.

Best regards,
Dan

Hi Dan,
I have the same problem getting SPI1 on the TX2 J21 header working. I modified, compiled, and flashed the dtb for spi@3240000 using the flash script on the host. I can see the spidev device in the /dev directory.

However, I cannot get any transitions on the SPI1 pins on J21. Could you explain how you configured the pinmux? I presume this is done on the host; I use JetPack 4.2 there. I am new to the Jetson platform, coming from a Xilinx SoC background, and I must say the NVIDIA documentation is not particularly helpful for newcomers.

Thanks

  1. Download the Jetson-TX2-Generic-Customer-Pinmux-Template.xlsm spreadsheet from the NVIDIA Jetson TX2 download site.
  2. Open it in Excel and enable macros.
  3. For pin G13, change the customer usage to SPI4_SCK and the pin direction to Output.
  4. For pin F14, change the customer usage to SPI4_DIN.
  5. For pin F13, change the customer usage to SPI4_DOUT and the pin direction to Output.
  6. For pin E14, change the customer usage to SPI4_CS0 and the pin direction to Output.
  7. Click on the "Generate DT File" button.
  8. For the board name, enter jetson-tx2-config-template (you will need to do this twice).
  9. Copy the generated pinmux and gpio-default DTSI files to the $TEGRA_BASE/kernel/pinmux/t186 folder on your Ubuntu VM.
  10. In a Terminal window on the Ubuntu VM, run the following commands:
export TEGRA_BASE=<wherever your JetPack_4.2_Linux_P3310/Linux_for_Tegra folder is>
cd $TEGRA_BASE/kernel/pinmux/t186
python pinmux-dts2cfg.py --pinmux \
    addr_info.txt gpio_addr_info.txt por_val.txt \
    --mandatory_pinmux_file mandatory_pinmux.txt \
    tegra18x-jetson-tx2-config-template-pinmux.dtsi \
    tegra18x-jetson-tx2-config-template-gpio-default.dtsi \
    1.0 \
    > $TEGRA_BASE/bootloader/t186ref/BCT/tegra186-mb1-bct-pinmux-quill-p3310-1000-c03.cfg

Build the kernel as usual and flash the Jetson.

Hi Dan,
Many thanks for your detailed explanation. I followed it to the letter; however, the terminal throws an error for the third command. The command exactly as I input it in the terminal:

HP-Z600-Workstation:~/nvidia/nvidia_sdk/JetPack_4.2_Linux_P3310/Linux_for_Tegra/kernel/pinmux/t186$ python3 pinmux-dts2cfg.py --pinmux addr_info.txt gpio_addr_info.txt  por_val.txt --mandatory_pinmux_file mandatory_pinmux.txt tegra18x-jetson-tx2-config-template-pinmux.dtsi tegra18x-jetson-tx2-config-template-gpio-default.dtsi 1.0 > $TEGRA_BASE/bootloader/t186ref/BCT/tegra186-mb1-bct-pinmux-quill-p3310-1000-c03.cfg

The error :

bash: usr1/nvidia/nvidia_sdk/JetPack_4.2_Linux_P3310/Linux_for_Tegra/bootloader/t186ref/BCT/tegra186-mb1-bct-pinmux-quill-p3310-1000-c03.cfg: No such file or directory

I checked that location and I can see the file there. I am unsure how to proceed.

Thanks for your help

Bade

Did you miss a leading ‘/’ in the definition of TEGRA_BASE? I noticed bash reports “usr1/nvidia/…” rather than “/usr1/nvidia/…”, although without knowing your system I can’t be sure of the paths. I would try accessing the path exactly as reported by bash using ls to see if the path is correct and adjust the definition of TEGRA_BASE accordingly. For example:

ls $TEGRA_BASE/bootloader/t186ref/BCT

On my system, I have:

export TEGRA_BASE=~/nvidia/nvidia_sdk/JetPack_4.2_Linux_P3310/Linux_for_Tegra

I used the leading ~ to reference my home directory. For my directory structure, this is equivalent to:

export TEGRA_BASE=/home/myuserid/nvidia/nvidia_sdk/JetPack_4.2_Linux_P3310/Linux_for_Tegra

where ~ corresponds to /home/myuserid.
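As a quick illustration of the difference (a sketch using the path from the error message above; the classify helper is just for demonstration), the shell treats a value without a leading '/' as relative to the current working directory:

```shell
# A path with no leading '/' is relative: bash resolves it against $PWD,
# which is why the redirect failed even though the absolute path exists.
classify() {
    case "$1" in
        /*) echo "absolute: $1" ;;
        *)  echo "relative: resolves to \$PWD/$1" ;;
    esac
}
classify "usr1/nvidia/nvidia_sdk/JetPack_4.2_Linux_P3310/Linux_for_Tegra"
classify "/usr1/nvidia/nvidia_sdk/JetPack_4.2_Linux_P3310/Linux_for_Tegra"
```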

Hi Dan,
Thanks. You are absolutely right. It was the path. I have got the command working now. Many thanks once again for your help.

Regards
Bade