Dynamically determine which pinmux and pmc to use when flashing P3668-0000 SOM with JetPack 4.6.1

I am using the P3668-0000 SOM with a custom carrier board. I have configured my pinmux and produced the appropriate dtsi and cfg files and the board is up and running.

I have a second revision of hardware coming which is going to swap the use of a few pins from inputs to outputs and I now need to support both of these configurations. The DTB between the two is identical, only the cfg file is changed due to pins changing from inputs to outputs or vice versa. I have generated a second set of dtsi and cfg files for this new configuration.

I want to be able to maintain a single image which is aware of both of these configurations, and do the following:

  1. When the hardware is initially provisioned, I will specify which hardware configuration is being flashed so the appropriate cfg files are used for that configuration
  2. After initial provisioning, future loads should programmatically detect which configuration the hardware is in and use the appropriate cfg file for that configuration

Is this something that is possible?

I have read Jetson Xavier NX Series Adaptation and Bring-Up and the Jetson Xavier NX Series and Jetson AGX Xavier Series Boot Flow (the forum only lets me include one link because I am a new user), but I am still not clear on the right process for this.

This is a function that is not in use by default, as the flash tools do not read the EEPROM on the carrier board in our default flash procedure.

You can try giving the command “dump eeprom baseinfo <cvb.bin>” to tegraflash.py and see if that works.

Also, another question: does your carrier board have an EEPROM to read?
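For reference, a hedged sketch of how that dump might be invoked from `Linux_for_Tegra/bootloader` with the SOM in recovery mode. The “dump eeprom baseinfo” string is exactly the command suggested above; `--chip 0x19` is the T194 (Xavier NX) chip ID, and additional tegraflash.py options may be required in practice.

```shell
# Build the tegraflash.py invocation for dumping the carrier-board
# EEPROM (assumption: run from Linux_for_Tegra/bootloader, board in
# recovery mode; extra options may be needed on a real setup).
build_dump_cmd() {
  local out="$1"
  printf 'python tegraflash.py --chip 0x19 --cmd "dump eeprom baseinfo %s"' "$out"
}

# On real hardware, run the generated command, e.g.:
#   eval "$(build_dump_cmd cvb.bin)"
```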

Thank you. The carrier board does have an EEPROM. I will attempt the tegraflash.py procedure tomorrow and report the results. I am also open to other approaches if NVIDIA has a different recommendation. Ultimately, the problem I am trying to solve is supporting two different GPIO pin configurations for two hardware sets without distributing two different installations: I want a single common installation for both hardware configurations, with the ability to detect at flash time which configuration to load.

I don’t think you are able to do that in software. You are putting the board into recovery mode, and no software is able to control GPIO at that moment.

I don’t think I have asked my question well; I am not trying to operate the GPIO while in recovery mode. Let me try to explain differently.

I have two sets of device tree files.
Set 1:
tegra19x-myboard-rev1-gpio-default.dtsi
tegra19x-myboard-rev1-padvoltage-default.dtsi
tegra19x-myboard-rev1-pinmux.dtsi

Set 2:
tegra19x-myboard-rev2-gpio-default.dtsi
tegra19x-myboard-rev2-padvoltage-default.dtsi
tegra19x-myboard-rev2-pinmux.dtsi

The only difference between these two configurations is that in rev1, pin 130 (GPIO3_PCC.03) is defined as an input, while in rev2 the same pin is defined as an output.

Because this is the only change, the device tree blob generated from these two different dtsi inputs is identical. I have verified this using dtc to decompile and inspect them.
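The check described above can be sketched like this. The quick version below is a byte-for-byte comparison with cmp; for the human-readable comparison I did, you can decompile each blob first (`dtc -I dtb -O dts file.dtb`) and diff the text. The file names are examples.

```shell
# Report whether two compiled DTBs are identical (byte-for-byte).
check_dtbs_match() {
  if cmp -s "$1" "$2"; then
    echo identical
  else
    echo different
  fi
}

# Example (hypothetical file names):
#   check_dtbs_match myboard-rev1.dtb myboard-rev2.dtb
```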

I have produced two different cfg files using pinmux-dts2cfg.py for these different GPIO configurations: tegra19x-pinmux-myboard-rev1.cfg and tegra19x-pinmux-myboard-rev2.cfg.

I have two questions:

  1. Is there a way to inspect a running system and determine which of these two configurations is currently in use on the system?
  2. How do I specify to the Nvidia tools which cfg to use when I flash a new image?
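For question 1, one approach I can imagine (a sketch, assuming debugfs is mounted, root access, and that the pin appears under a name like PCC.03 in your kernel's listing) is to read the direction reported in /sys/kernel/debug/gpio:

```shell
# Given the text of a gpio debugfs listing and a pin name, print the
# direction ("out", "in", or "unknown" if the pin is not listed).
pin_direction() {
  local listing="$1" pin="$2" line
  line=$(grep -F "$pin" <<< "$listing" || true)
  case "$line" in
    *" out "*) echo out ;;
    *" in "*)  echo in ;;
    *)         echo unknown ;;
  esac
}

# Usage on hardware (as root):
#   pin_direction "$(cat /sys/kernel/debug/gpio)" PCC.03
```

Note this only shows lines that a driver has claimed; whether GPIO3_PCC.03 shows up at all depends on the device tree and drivers in use.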

My objective is to avoid compiling two different DTBs for these two configurations, since the DTBs are functionally equivalent and only the GPIO configuration differs. Instead, I want to use the same DTB and specify at flash time which cfg file to use to configure the GPIOs.
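For question 2, one option I am considering (a sketch only: in my version of the BSP the board .conf file selects the pinmux cfg via the PINMUX_CONFIG variable, so a thin wrapper .conf per revision could override just that variable; the wrapper names below are examples, not stock L4T names):

```shell
# Map a hardware revision to its pinmux cfg file name (the cfg names
# are the ones generated earlier in this thread).
select_pinmux_cfg() {
  case "$1" in
    rev1) echo tegra19x-pinmux-myboard-rev1.cfg ;;
    rev2) echo tegra19x-pinmux-myboard-rev2.cfg ;;
    *)    echo unknown ;;
  esac
}

# With one wrapper .conf per revision (each sourcing the common board
# conf and setting PINMUX_CONFIG to the matching cfg), flashing the
# desired revision would be explicit, e.g.:
#   sudo ./flash.sh myboard-rev1 mmcblk0p1
```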