Supported layers when converting models on the Jetson Orin NX

Hi,

I have a question about supported layers when converting an ONNX model to a TensorRT engine on the Jetson Orin NX.

My environment is as follows.

JetPack version: 5.1.2
TensorRT version: 8.5.2
Model: yolov7

I added a ConvTranspose layer (for upsampling) to the public yolov7 model.
The original yolov7 model retains its inference accuracy when converted from ONNX to a TensorRT engine in FP16.
However, the accuracy of the model with the added ConvTranspose layer drops significantly after the FP16 conversion.

So my question is: does the current JetPack version not support conversion of the ConvTranspose layer?

Any advice would be appreciated.
Thanks,

Hi,

You can find the support matrix for TensorRT 8.5 below:

Please note that there are several newer TensorRT releases available for Orin NX.
You can try to upgrade the software version to see if it helps.

Thanks.
