Issues with Jetson Xavier - TensorRT

We are optimising a TensorFlow v1 frozen inference graph for TensorRT, following the guide at https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html .
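
For reference, this is roughly our conversion step (a minimal sketch; the paths and output node names are placeholders for our actual model):

```python
# Minimal sketch of our TF-TRT conversion, following the guide above.
# Paths and node names are placeholders for our actual model.
import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt

with tf.io.gfile.GFile('frozen_inference_graph.pb', 'rb') as f:
    frozen_graph = tf.compat.v1.GraphDef()
    frozen_graph.ParseFromString(f.read())

converter = trt.TrtGraphConverter(
    input_graph_def=frozen_graph,
    nodes_blacklist=['detection_boxes', 'detection_scores'],  # output nodes
    precision_mode='FP16',
    is_dynamic_op=False)  # TRT engines are built now and embedded in the graph
trt_graph = converter.convert()

with tf.io.gfile.GFile('trt_graph.pb', 'wb') as f:
    f.write(trt_graph.SerializeToString())
```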

The optimized graph is built on the Jetson Nano; however, it will not run on the Xavier. Both devices run JetPack 4.4.

How can we build a TensorRT graph compatible with all Jetson devices?

We would really appreciate your help, as this is making it difficult for us to recommend Jetson devices to clients.

Hi,

Sorry, but a TensorRT engine is not portable.
When converting a model, TensorRT selects optimal algorithms based on the capability and architecture of the GPU it is running on.
This ties the engine to the device it was built on, so it cannot be used on different devices.

It’s recommended to use an intermediate model format (e.g. ONNX, UFF, …) to share across devices,
and to generate the engine file directly on the target device at first launch.
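
For example, below is a minimal sketch using the TensorRT Python API from JetPack 4.4 (file names are placeholders, and the ONNX model is assumed to have static input shapes):

```python
# Minimal sketch: build the TensorRT engine from a shared ONNX file on the
# target device at first launch, then reuse the cached engine afterwards.
# File names are placeholders; error handling is kept minimal.
import os
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def get_engine(onnx_path='model.onnx', engine_path='model.engine'):
    if os.path.exists(engine_path):
        # Engine was already built on this device: just deserialize it.
        with open(engine_path, 'rb') as f, trt.Runtime(TRT_LOGGER) as runtime:
            return runtime.deserialize_cuda_engine(f.read())

    # First launch on this device: build the engine from the ONNX file.
    with trt.Builder(TRT_LOGGER) as builder, \
         builder.create_network(
             1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)) as network, \
         builder.create_builder_config() as config, \
         trt.OnnxParser(network, TRT_LOGGER) as parser:
        with open(onnx_path, 'rb') as f:
            if not parser.parse(f.read()):
                raise RuntimeError(parser.get_error(0))
        config.max_workspace_size = 1 << 28  # 256 MiB of build workspace
        engine = builder.build_engine(network, config)
        with open(engine_path, 'wb') as f:
            f.write(engine.serialize())
        return engine
```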

Thanks.

Thanks, AastaLLL. Can you please elaborate on “use an intermediate model format”?

Hi,

You can use ONNX, UFF, or a Caffe model as the sharing format.
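
For example, a TensorFlow v1 frozen graph can be converted to UFF with the uff package that ships with TensorRT (the node names below are placeholders); the tf2onnx tool offers a similar path to ONNX:

```python
# Minimal sketch: convert a TensorFlow v1 frozen graph to UFF so the same
# file can be shared across Jetson devices. Node names are placeholders.
import uff

uff.from_tensorflow_frozen_model(
    frozen_file='frozen_inference_graph.pb',
    output_nodes=['detection_boxes', 'detection_scores'],
    output_filename='model.uff')
```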
Thanks.