How to modify NvSciIpc (INTER_CHIP, PCIe) Channel Properties

Please provide the following info (tick the boxes after creating this topic):
Software Version
DRIVE OS 6.0.10.0
DRIVE OS 6.0.8.1
DRIVE OS 6.0.6
DRIVE OS 6.0.5
DRIVE OS 6.0.4 (rev. 1)
DRIVE OS 6.0.4 SDK
other

Target Operating System
Linux
QNX
other

Hardware Platform
DRIVE AGX Orin Developer Kit (940-63710-0010-300)
DRIVE AGX Orin Developer Kit (940-63710-0010-200)
DRIVE AGX Orin Developer Kit (940-63710-0010-100)
DRIVE AGX Orin Developer Kit (940-63710-0010-D00)
DRIVE AGX Orin Developer Kit (940-63710-0010-C00)
DRIVE AGX Orin Developer Kit (not sure of its number)
other

SDK Manager Version
2.1.0
other

Host Machine Version
native Ubuntu Linux 20.04 Host installed with SDK Manager
native Ubuntu Linux 20.04 Host installed with DRIVE OS Docker Containers
native Ubuntu Linux 18.04 Host installed with DRIVE OS Docker Containers
other

Issue Description
Hi,

I am trying to establish a C2C PCIe connection between 2 Orins in order to share images directly in CUDA memory. I followed this webpage to set up the C2C connection between the two Orins. I was able to establish the connection and verified it by running the dwcgf example.

Now, I would like to modify the INTER_CHIP (PCIe) channel properties according to this section. I can only find this file:

<PDK_TOP>/drive-linux/kernel/source/hardware/nvidia/platform/t23x/automotive/kernel-dts/p3710/common/tegra234-p3710-0010-nvscic2c-pcie.dtsi

inside my flashing docker container at:

/drive/drive-linux/kernel/source/hardware/nvidia/platform/t23x/automotive/kernel-dts/p3710/common/tegra234-p3710-0010-nvscic2c-pcie.dtsi

I changed the frame numbers in the file, leaving me with this configuration:

/ {
	/*
	 * SoC:0 PCIe RP C5 (upstream) -> SoC:0 PCIe EP C6 (downstream)
	 * @nvscic2c-pcie-s0-c5-epc and @nvscic2c-pcie-s0-c6-epf must be
	 * updated together.
	 */
	nvscic2c-pcie-s0-c5-epc {
		compatible = "nvidia,tegra-nvscic2c-pcie-epc";
		status = "disabled";

		nvidia,host1x = <&host1x>;
		nvidia,pcie-edma = <&pcie_c5_rp>;
		nvidia,pci-dev-id = <0x22CC>;

		/* <local, peer> */
		nvidia,board-id = <0 0>;
		nvidia,soc-id = <0 0>;
		nvidia,cntrlr-id = <5 6>;

		/*
		 * prefix<nvscic2c_pcie>_s<LocalSoCId>_c<LocalCntrlrId>_
		 * <EndpointId>
		 */
		nvidia,endpoint-db =
		"nvscic2c_pcie_s0_c5_1,   64,  00032768,  26001",
		"nvscic2c_pcie_s0_c5_2,   64,  00032768,  26002",
		"nvscic2c_pcie_s0_c5_3,   64,  00032768,  26003",
		"nvscic2c_pcie_s0_c5_4,   64,  00032768,  26004",
		"nvscic2c_pcie_s0_c5_5,   64,  00032768,  26005",
		"nvscic2c_pcie_s0_c5_6,   64,  00032768,  26006",
		"nvscic2c_pcie_s0_c5_7,   64,  00032768,  26007",
		"nvscic2c_pcie_s0_c5_8,   64,  00032768,  26008",
		"nvscic2c_pcie_s0_c5_9,   64,  00032768,  26009",
		"nvscic2c_pcie_s0_c5_10,  64,  00032768,  26010",
		"nvscic2c_pcie_s0_c5_11,  64,  00032768,  26011",
		"nvscic2c_pcie_s0_c5_12,  16,  00000064,  26012";
	};

	/*
	 * SoC:0/1 PCIe EP C6 (downstream) -> SoC:0 PCIe RP C5 (upstream)
	 * @nvscic2c-pcie-s0-c6-epf and @nvscic2c-pcie-s0-c5-epc must be
	 * updated together.
	 */
	nvscic2c-pcie-s0-c6-epf {
		compatible = "nvidia,tegra-nvscic2c-pcie-epf";
		status = "disabled";

		nvidia,host1x = <&host1x>;
		nvidia,pcie-edma = <&pcie_c6_ep>;
		nvidia,pci-dev-id = <0x22CC>;

		/* <local, peer> */
		nvidia,board-id = <0 0>;
		nvidia,soc-id = <0 0>;
		nvidia,cntrlr-id = <6 5>;

		/* BAR window size.*/
		nvidia,bar-win-size = <0x40000000>;

		/*
		 * prefix<nvscic2c_pcie>_s<LocalSoCId>_c<LocalCntrlrId>_
		 * <EndpointId>
		 */
		nvidia,endpoint-db =
		"nvscic2c_pcie_s0_c6_1,   64,  00032768,  26101",
		"nvscic2c_pcie_s0_c6_2,   64,  00032768,  26102",
		"nvscic2c_pcie_s0_c6_3,   64,  00032768,  26103",
		"nvscic2c_pcie_s0_c6_4,   64,  00032768,  26104",
		"nvscic2c_pcie_s0_c6_5,   64,  00032768,  26105",
		"nvscic2c_pcie_s0_c6_6,   64,  00032768,  26106",
		"nvscic2c_pcie_s0_c6_7,   64,  00032768,  26107",
		"nvscic2c_pcie_s0_c6_8,   64,  00032768,  26108",
		"nvscic2c_pcie_s0_c6_9,   64,  00032768,  26109",
		"nvscic2c_pcie_s0_c6_10,  64,  00032768,  26110",
		"nvscic2c_pcie_s0_c6_11,  64,  00032768,  26111",
		"nvscic2c_pcie_s0_c6_12,  16,  00000064,  26112";
	};
};
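For context on the edits above: each `nvidia,endpoint-db` entry appears to follow the pattern `<name>, <nFrames>, <frameSize>, <id>`, so a channel claims roughly nFrames × frameSize bytes of the BAR window. The following is only a rough sanity-check sketch (driver-internal bookkeeping is not counted, and the entry layout is inferred from the values above) that the configured c6-epf channels fit inside the `nvidia,bar-win-size` of 0x40000000:

```python
# Sketch: check that the endpoint-db channels fit in the BAR window.
# Assumes each entry is "<name>, <nFrames>, <frameSize>, <id>" and that a
# channel needs roughly nFrames * frameSize bytes (driver overhead ignored).
endpoint_db = [
    (f"nvscic2c_pcie_s0_c6_{i}", 64, 32768, 26100 + i) for i in range(1, 12)
] + [("nvscic2c_pcie_s0_c6_12", 16, 64, 26112)]

bar_win_size = 0x40000000  # 1 GiB, from nvidia,bar-win-size above

total = sum(frames * size for _, frames, size, _ in endpoint_db)
print(f"total channel memory: {total} bytes ({total / 2**20:.1f} MiB)")
assert total <= bar_win_size, "channels exceed the BAR window"
```

With these values the channels use about 22 MiB, comfortably inside the 1 GiB window, so growing frame counts or sizes further should have headroom.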

I then proceeded to flash my device normally using the command:

/drive/./flash.py /dev/ttyACM1 p3710 --clean

Will my modifications be flashed onto the target? Will the C2C PCIe kernel modules be recompiled accordingly? Is there a way to verify that my changes took effect after flashing? Thanks!

Error String
None

Logs
None

Dear @extern.ray.xie ,

Please run `cat /proc/device-tree/nvscic2c-pcie-s0-c6-epf/nvidia,endpoint-db` on the target to check whether the changes are reflected.
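One readability note on that check: device-tree string-list properties are stored as NUL-separated strings, so a plain `cat` prints the entries run together. A small sketch that prints one endpoint entry per line (the guard is only there so the snippet degrades gracefully off-target):

```shell
# Dump the endpoint-db property from the live device tree on the target.
# String-list properties are NUL-separated, so translate NULs to newlines
# to see one endpoint entry per line.
prop=/proc/device-tree/nvscic2c-pcie-s0-c6-epf/nvidia,endpoint-db
if [ -f "$prop" ]; then
    tr '\0' '\n' < "$prop"
else
    echo "property not found; is this running on the flashed target?"
fi
```

If the output matches the `nvidia,endpoint-db` strings from the edited dtsi, the change made it into the flashed device tree.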

Hi Siva,

Thank you for the tip. I will check again on Monday. However, is my method of modification (changing only the configuration file tegra234-p3710-0010-nvscic2c-pcie.dtsi in the flashing container) correct? Thanks!

(changing only the configuration file tegra234-p3710-0010-nvscic2c-pcie.dtsi in the flashing container) correct?

>> Yes, the changes look good. If `flash.py` does not pick up the update, you need to run the bind and flash steps separately. Let me know your observations.

Hi Siva,

I was able to verify the changes using your method. Thanks!

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.