Xid 31 error when running two CudaGraph captured ExecutionContexts concurrently

This issue is not strictly related to Orin Nano, but impacts our ability to test systems which we deploy to Orin Nano. We were asked to create a post on the Jetson forum after contacting Nvidia.

For details see the following links:

Hi,

Jetson doesn’t support concurrent memory access.
If the error is triggered by concurrent access, it is expected.

You can find more information on Jetson-specific usage below:

Thanks.

Are you referring to MPS? If so, that’s unrelated to the issue. Maybe you’re referring to concurrent access to unified memory? Also not relevant.

Please note this issue does not impact us on our Jetson based platform (because we currently only have a single process using GPU compute). But we have CI servers which we use to test code which runs on our Jetson platform. And we would like to be able to run many GPU compute processes on these servers.

Yes, I recognize this is a somewhat tenuous connection to Jetson, but our contact at Nvidia asked us to post the issue to this forum in order for it to be tracked internally.

Hi,

To check it further, could you share a reproducible source with us?
The link you shared has a precompiled TensorRT plan, is it compiled for Orin Nano?

Thanks.

Hi AastaLLL, the steps for reproduction are available here. The steps are for a desktop system. We do have these plans compiled for Orin Nano, but the plan we’ve provided is built for desktop (for CI/CD regression testing purposes).

We have confirmation that the issue is with CUDA. What we’re looking for is a workaround.
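
For reference, the pattern that triggers the error can be sketched as below. This is a simplified reconstruction, not the actual reproduction code: helper names are hypothetical, and engine deserialization, tensor bindings, and error checking are omitted. It assumes TensorRT's `enqueueV3` API and CUDA 12-style graph instantiation:

```cpp
#include <cuda_runtime.h>
#include <NvInfer.h>
#include <thread>

// Capture one ExecutionContext's enqueue into an instantiated CUDA graph.
// (Input/output tensor addresses must already be set on the context.)
cudaGraphExec_t captureContext(nvinfer1::IExecutionContext* ctx,
                               cudaStream_t stream) {
    cudaGraph_t graph;
    cudaGraphExec_t graphExec;
    cudaStreamBeginCapture(stream, cudaStreamCaptureModeThreadLocal);
    ctx->enqueueV3(stream);            // recorded into the graph, not executed
    cudaStreamEndCapture(stream, &graph);
    cudaGraphInstantiate(&graphExec, graph, 0);
    cudaGraphDestroy(graph);
    return graphExec;
}

// Launch the two captured graphs concurrently on independent streams.
// Running two such graphs at the same time is what intermittently
// produces the Xid 31 (GPU memory page fault) error.
void runConcurrently(cudaGraphExec_t g0, cudaGraphExec_t g1,
                     cudaStream_t s0, cudaStream_t s1) {
    std::thread t0([&] {
        for (int i = 0; i < 1000; ++i) cudaGraphLaunch(g0, s0);
        cudaStreamSynchronize(s0);
    });
    std::thread t1([&] {
        for (int i = 0; i < 1000; ++i) cudaGraphLaunch(g1, s1);
        cudaStreamSynchronize(s1);
    });
    t0.join();
    t1.join();
}
```

Launching either graph alone, or serializing the two launches on a single stream, does not reproduce the error; only the concurrent case does.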

There has been no update from you for a while, so we assume this is no longer an issue.
Hence, we are closing this topic. If you need further support, please open a new one.
Thanks

Hi,

Sorry for the late update.
We will try to reproduce this issue in our environment.

Do you mean the model0.plan and model1.plan are compiled on Orin Nano with JetPack 6.0?
So we can use them without re-converting the engines?

Thanks.