NvSciIpc INTER_CHIP hardware requirements

The documentation for JetPack 5.0.2 describes new support for inter-process communication using NvSci. The NvSci documentation makes reference to IPC between processes on separate SoMs via the INTER_CHIP backend. What are the hardware requirements to enable NvSci communication between two AGX Orin SoMs? Can it be implemented with the built-in PCIe endpoint, or is a separate PCIe switch implementing a non-transparent bridge required?

We are checking if this is enabled for Orin. Will update.

We have checked and confirmed that this function is not supported on Jetson platforms. As of now, there is no plan to support it in the near future.

Hi DaneLLL,

On the release notes page for Jetson Linux 34.1, there is a link to the NVSCI documentation. In this document on page 91, there are references to the INTER_CHIP backend. On page 107, there is a section entitled “Differences for NvStreams and NvSciIpc on Jetson Linux”, and while it states clearly that the inter-VM backend is not supported, there is no mention of the inter-chip backend.

If I understand what you’re saying, this document is incorrect and the INTER_CHIP backend will not be supported on any L4T/Jetson Linux device. If that interpretation is correct, what is the recommended mechanism for implementing inter-chip IPC on Jetson Linux? Is it recommended to implement a custom protocol based on the PCIe endpoint support?
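For context, NvSciIpc channels on Jetson Linux are declared in `/etc/nvsciipc.cfg`, and the backend keyword in each entry selects the transport. A minimal sketch, assuming the default entry format (backend, two endpoint names, frame count, frame size) with illustrative endpoint names:

```
# /etc/nvsciipc.cfg — one channel per line (illustrative values):
# <backend>     <endpoint-a>   <endpoint-b>   <nframes>  <frame-size>
INTER_THREAD    itc_test_0     itc_test_1     64         1536
INTER_PROCESS   ipc_test_0     ipc_test_1     64         1536
```

If INTER_CHIP is indeed unsupported on Jetson, then only INTER_THREAD and INTER_PROCESS entries like the above would be usable here, which is consistent with your answer.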

Are there any other functionality differences for NvSci on Jetson Linux that developers should be aware of?

The document may not clearly state that the feature is not supported on L4T releases. We will update the document.

So in your use case, you would like to pass CUDA buffers from one Orin to the other. Is that correct?
