ADIS16475 - SPI CE Signal Goes High When It Should Stay Low to Allow the Chip to Respond

I am trying to get adis16475 to work with Jetson Orin NX 16GB and Jetpack 36.4.3.

Below are the steps I have followed so far, and I am getting very strange behaviour: CE deasserts (goes high) while the clock is still being provided, which disables the chip so it can't send its response.

Step by step:

I first generated the pinmux configuration files with the spreadsheet from the documentation (I know the filenames say "overlay", even though they are not overlays):

Orin-obc_pinmux_overlay_v1.0-gpio-default.txt (3.4 KB)
Orin-obc_pinmux_overlay_v1.0-padvoltage-default.txt (2.4 KB)
Orin-obc_pinmux_overlay_v1.0-pinmux.txt (65.8 KB)
(attached as txt, because the website only allows specific extensions).

Then I decided to modify this board config file:
p3768-0000-p3767-0000-a0.conf
instead of making a new one, because I already had issues before with the flashing process not working. The command from the quick start guide is completely wrong, and the partition configuration XML files are not in the locations it gives:

$ sudo ./tools/kernel_flash/l4t_initrd_flash.sh --external-device nvme0n1p1
-c tools/kernel_flash/flash_l4t_t234_nvme.xml -p "-c bootloader/generic/cfg/flash_t234_qspi.xml"
--showlogs --network usb0 jetson-orin-nano-devkit internal

I found out that the most reliable and simplest way to flash was to use this:
sudo ./nvsdkmanager_flash.sh --storage nvme0n1p1

but afaik it does not allow custom board configs, instead it figures it out automatically, so I changed this file:

p3768-0000-p3767-0000-a0.conf

with the following changes (diff of my modified file against the original, so `<` lines are my version and `>` lines are the default):

  56,57c56,57
  < 			PINMUX_CONFIG="Orin-obc_pinmux_overlay_v1.0-pinmux.dtsi";
  < 			PMC_CONFIG="Orin-obc_pinmux_overlay_v1.0-padvoltage-default.dtsi";
  ---
  > 			PINMUX_CONFIG="tegra234-mb1-bct-pinmux-p3767-dp-a01.dtsi";
  > 			PMC_CONFIG="tegra234-mb1-bct-padvoltage-p3767-dp-a01.dtsi";
  99,100c99,100
  < PINMUX_CONFIG="Orin-obc_pinmux_overlay_v1.0-pinmux.dtsi";
  < PMC_CONFIG="Orin-obc_pinmux_overlay_v1.0-padvoltage-default.dtsi";
  ---
  > PINMUX_CONFIG="tegra234-mb1-bct-pinmux-p3767-dp-a03.dtsi";
  > PMC_CONFIG="tegra234-mb1-bct-padvoltage-p3767-dp-a03.dtsi";

And I replaced those files with the gpio one from the spreadsheet:
bootloader/tegra234-mb1-bct-gpio-p3767-dp-a03.dtsi
bootloader/tegra234-mb1-bct-gpio-p3767-hdmi-a03.dtsi

After flashing, I was able to verify with gpioinfo that the new configuration was applied; however, the SPI interfaces would not work at all. Following this forum thread, I removed the lines that assign the gpio pins, as can already be seen in the config attached above. I also changed all the relevant entries in the pinmux file from:
nvidia,function = "rsvd1";
to:
nvidia,function = "spi1";
but still nothing: all of the pins were dead. I tested with both the loopback test and spi-pipe from the spi-tools package.

Then, as a last resort, I used the jetson-io.py tool to reconfigure the header, and it worked for both of the interfaces. When I inspected the files that it generated:

40pin_ovly.dts.txt (2.9 KB)
tegra234-40pin.dts.txt (6.1 KB)

It seems that those overlays are basically doing the same thing as the configs from the spreadsheet, but with macros, and somehow this works. For clarity, I was doing this testing with the default kernel device tree, so the tegra spidev driver was assigned to the ports. I would love to learn what the difference is and how to make my approach work, but that's a side issue now, since it works. Perhaps the parts of the tree that define the macros are also overlaying the pin settings?

Going back to the ADIS driver, the next step was to create the device tree config. Following this thread, I modified the default tree and connected the adis driver. I also have an SPI-CAN interface there, which also doesn't work, but that might be a hardware issue:

tegra234-p3768-0000+p3767-xxxx-nv-common.dtsi.txt (7.7 KB)
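For context, the relevant part of my device-tree change looks roughly like this. It is a sketch, assuming SPI0 on the 40-pin header (controller spi@3210000), the "-3" variant of the IMU, and CS0; the data-ready interrupt pin is a placeholder for whatever your wiring uses:

```dts
/* Sketch: ADIS16475 on the first Tegra SPI controller (spi@3210000).
 * The interrupt GPIO and the exact adis16475 variant are placeholders. */
&spi0 {
	status = "okay";

	imu@0 {
		compatible = "adi,adis16475-3"; /* match your exact variant */
		reg = <0>;                      /* CS0 */
		spi-max-frequency = <1000000>;  /* the IMU tops out around 2 MHz */
		spi-cpha;                       /* ADIS1647x uses SPI mode 3 */
		spi-cpol;
		interrupt-parent = <&gpio>;     /* DR (data ready) line, placeholder pin */
		interrupts = <TEGRA234_MAIN_GPIO(Z, 7) IRQ_TYPE_EDGE_RISING>;
	};
};
```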

After that I compiled the kernel modules using the kernel sources provided by Nvidia. This is my menuconfig:

For the IMU (ADIS16475), the left column shows my selection and the right column the default state in the kernel provided by Nvidia:

	Device Drivers  --->
	    <*>   Industrial I/O support --->                               <*>
	          --- Industrial I/O support
	          [*]   Enable buffer support within IIO                    <*>
	          -*-     Industrial I/O buffering based on kfifo           <M>
	          -*-   Enable triggered sampling support                   <M>
	          [--snip--]
	          Inertial measurement units  --->
	              [--snip--]
	              <*> Analog Devices ADIS16475 and similar IMU driver   < > (changed to <M>)
	              [--snip--]

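In .config terms, my reading of the menuconfig selection above corresponds to something like the following (assuming the ADIS driver itself ends up as a module, since I modprobe it later):

```
CONFIG_IIO=y
CONFIG_IIO_BUFFER=y
CONFIG_IIO_KFIFO_BUF=y
CONFIG_IIO_TRIGGERED_BUFFER=y
CONFIG_ADIS16475=m
```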
The drivers compiled successfully and they load fine, but once I modprobe the ADIS driver I get very strange timings. Chip enable goes high when the clock for the IMU's response is provided:

I tried adjusting those parameters, but it didn’t help:
nvidia,cs-setup-clk-count = <0x1e>;
nvidia,cs-hold-clk-count = <0x1e>;
nvidia,enable-hw-based-cs;
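For what it's worth, if I understand the NVIDIA Tegra SPI binding correctly, these per-device CS parameters belong in a controller-data subnode of the SPI slave node rather than in the controller node itself. A sketch of where I placed them (slave node name and frequency are from my setup):

```dts
/* Sketch: per-device Tegra SPI CS timing parameters inside the
 * slave's controller-data subnode, per the NVIDIA binding as I read it. */
spi@0 {
	compatible = "tegra-spidev";
	reg = <0>;
	spi-max-frequency = <1000000>;
	controller-data {
		nvidia,enable-hw-based-cs;
		nvidia,cs-setup-clk-count = <0x1e>;
		nvidia,cs-hold-clk-count = <0x1e>;
	};
};
```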

I found this post suggesting that the driver itself might be causing the timing problems. There are other posts that lead to the same conclusion:

Should I investigate this further or is there something else that I could be missing? I would greatly appreciate any help.

Also, what the hell?

Hi wiktort52,

Are you using the devkit or custom board for Orin NX?

Could you refer to ADIS16465 SPI Driver on Jetson AGX Orin Jetpack6.2 - #8 by KevinFFF to port ADIS16475 IMU module?

There might be a clock-related issue.

Hi,

Yeah, you are right. I forgot about that. I also changed the clock rate:

cd /sys/kernel/debug/bpmp/debug/clk/spi1
echo "clk_m" > parent
echo "1000000" > rate

and that was indeed an issue, since the IMU can do a max of 2 MHz, but the minimum of the faster clock source is ~3.3 MHz. However, that still did not solve the timing issue. In fact, the capture from my post above already has the clock set to 1 MHz.

The capture above is from a dev board without the IMU. I also did testing on our target board with the IMU, and I got the same results. The IMU seems to be recording the initial command, which is a self-test, because it stops sending data-ready signals on a separate pin, but because CE stays high, it never responds on the data lines to the command.

@KevinFFF ?

Alright, if the problem isn’t connected to the clock source, let’s begin again from the very start.

Do you mean SPI loopback test not working?

If you are using the devkit (p3768) for Orin NX, you can simply use Jetson-IO to configure the pins instead of using the pinmux spreadsheet to generate the pinmux/gpio dtsi.

@KevinFFF, please read my post.

Then, as a last resort, I used the jetson-io.py tool to reconfigure the header, and it worked for both of the interfaces. When I inspected the files that it generated

It seems that those overlays are basically doing the same thing as the configs from the spreadsheet, but with macros, and somehow this works

I would love to learn what the difference is and how to make my approach work

If anything is unclear, then please ask me about it. I will be more than happy to clarify. Pinmux is a side issue; I suspect that making a new board config might solve it. The main problem is the CE timing being all wrong.

Sorry for missing them.

They should do similar things for pin configuration.
The pinmux/gpio dtsi are loaded in early boot (MB1), while Jetson-IO configures the kernel device tree so that those pins are configured by the tegra pinctrl driver. They are configured at different stages, but both should work.

It's fine, since Jetson-IO works for the SPI loopback test in your case.

Do you mean the CS behavior is not as expected?
If so, please apply the patch from Jetson orin nano SPI Speed not changing - #9 by KevinFFF to check if it could help.


If Jetson-IO only touches the kernel, then I am gonna just hardcode the configuration into the device tree, since that's probably what's not applied properly during the flashing process. gpioinfo probably just pulls the values straight from the registers.

That did it, thx @KevinFFF.

I saw the setting for hardware chip enable:
nvidia,enable-hw-based-cs;
but when I tried to apply it, nothing changed. Looking at the code, for some reason the tegra driver allows that only for a single transfer. I also tried updating the driver to the version from kernel 6.3, but that didn't help either. In newer kernels after that, the driver changed a lot and started to depend on gpiolib function calls that do not exist in 5.15. Maybe that has something to do with the problem here?

You may add debug messages in the driver to check why nvidia,enable-hw-based-cs; does not work, or customize it for your use case.

Please apply the current patch to the upstream K6.3 driver to check if it could help.


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.