Can I use C++ torch and TensorRT on Jetson Xavier at the same time?

* Jetpack:        4.3 [L4T 32.3.1]
* Type:           AGX Xavier
* Name:           NVIDIA Jetson AGX Xavier
* GPU-Arch:       7.2
* cuDNN:
* VisionWorks:
* OpenCV:         4.1.2 compiled CUDA: YES
* CUDA:           10.0.326
* TensorRT:

I got the prebuilt torch library from the PyTorch for Jetson Nano - version 1.4.0 now available thread.

And I want to use torch, TensorRT, and DeepStream in C++ at the same time.

Does this work? (When I tried, some errors occurred without any specific logs.)

Moving to Jetson Xavier forum so that Jetson team can take a look.


It can work.
But it's recommended to use a separate CUDA context for each framework.


Thanks for the reply!

How can I separate the CUDA context of each framework?

Could you give me some example?


You can check this issue for an example:

They create a new CUDA context for TensorRT so that TensorRT and TensorFlow can work together.
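To illustrate the idea from that issue, here is a minimal sketch using the CUDA driver API: a dedicated context is created for TensorRT and pushed/popped around TensorRT calls, so libtorch (which uses the device's primary context through the CUDA runtime API) keeps its own context untouched. The TensorRT calls in the comments are placeholders; this is an assumption-laden sketch, not a verified full pipeline.

```cuda
// Sketch: isolate TensorRT in its own CUDA driver context so it does not
// share the primary context used by libtorch / the CUDA runtime API.
#include <cuda.h>

int main() {
    // Initialize the driver API and pick device 0 (the Xavier's iGPU).
    cuInit(0);
    CUdevice dev;
    cuDeviceGet(&dev, 0);

    // Create a dedicated context for TensorRT.
    // cuCtxCreate makes it current on this thread.
    CUcontext trtCtx;
    cuCtxCreate(&trtCtx, 0, dev);

    // ... deserialize the TensorRT engine and create its execution
    //     context here, while trtCtx is current ...

    // Pop the TensorRT context so that subsequent runtime-API calls
    // (e.g. from libtorch) fall back to the primary context.
    CUcontext popped;
    cuCtxPopCurrent(&popped);

    // ... run torch inference here on the primary context ...

    // Before each TensorRT inference call, push the dedicated
    // context back, run, then pop it again.
    cuCtxPushCurrent(trtCtx);
    // ... e.g. trtExecContext->enqueueV2(bindings, stream, nullptr); ...
    cuCtxPopCurrent(&popped);

    // Clean up the dedicated context when done.
    cuCtxDestroy(trtCtx);
    return 0;
}
```

The push/pop pair around each TensorRT call is the key point: it guarantees the two frameworks never issue work into each other's context from the same thread.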