Can I use C++ Torch and TensorRT on Jetson Xavier at the same time?

* Jetpack:        4.3 [L4T 32.3.1]
* Type:           AGX Xavier
* Name:           NVIDIA Jetson AGX Xavier
* GPU-Arch:       7.2
* cuDNN:          7.6.3.28-1+cuda10.0
* VisionWorks:    1.6.0.500n
* OpenCV:         4.1.2 compiled CUDA: YES
* CUDA:           10.0.326
* TensorRT:       6.0.1.10-1+cuda10.0

I got the prebuilt Torch library from the "PyTorch for Jetson Nano - version 1.4.0 now available" thread.

I want to use Torch, TensorRT, and DeepStream in C++ at the same time.

Does this work? (When I tried, an error occurred without any specific logs.)

Moving to the Jetson Xavier forum so that the Jetson team can take a look.

Hi,

It can work.
But it is recommended to keep a separate CUDA context for each framework.

Thanks.

Thanks for the reply!

How can I separate the CUDA context of each framework?

Could you give me an example?

Hi,

You can check this issue for an example:

They create a new CUDA context for TensorRT so that TensorRT and TensorFlow can work together.
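
Below is a minimal C++ sketch of the same idea (the linked issue uses TensorFlow/Python, so this is an assumption, not the code from that issue): create a dedicated CUDA context for TensorRT with the CUDA driver API and make it current only while TensorRT work runs, so it does not share the context that libtorch or DeepStream is using. The TensorRT inference call itself is only indicated by a comment.

```cpp
#include <cuda.h>   // CUDA driver API

int main() {
    CUdevice device;
    CUcontext trtCtx;

    // Initialize the driver API and pick device 0 (the Xavier's integrated GPU).
    cuInit(0);
    cuDeviceGet(&device, 0);

    // Create a dedicated context for TensorRT.
    cuCtxCreate(&trtCtx, 0, device);

    // Make the TensorRT context current only while running TensorRT work.
    cuCtxPushCurrent(trtCtx);
    // ... deserialize the TensorRT engine and run inference here ...
    cuCtxPopCurrent(nullptr);

    // Outside the push/pop region the previously current context (e.g. the one
    // libtorch or DeepStream created) is restored, so the frameworks do not
    // step on each other's context.

    cuCtxDestroy(trtCtx);
    return 0;
}
```
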

Thanks.