I have a question about supported layers when converting an ONNX model to a TensorRT engine on the Jetson Orin NX.
My setup is as below.
Jetpack version : 5.1.2
TensorRT version : 8.5.2
Model : yolov7
I added a ConvTranspose layer (for upsampling) to the public yolov7 model.
The original yolov7 model retains its inference performance when converted from ONNX to a TensorRT FP16 engine.
However, the inference performance of the model with the added ConvTranspose layer drops significantly after conversion to FP16.
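For what it's worth, I suspect FP16 range/precision rather than missing layer support, since a ConvTranspose layer can accumulate fairly large sums. A minimal NumPy sketch (illustrative only, not my model code) of how an FP16 accumulation can overflow where FP32 does not:

```python
import numpy as np

# Illustrative only: a layer's output is a large sum of products.
# Here 4096 products of 32.0 each, total 131072 -- well within FP32 range
# but above FP16's maximum representable value (65504).
products = np.full(4096, 32.0, dtype=np.float16)

fp32_total = products.astype(np.float32).sum()  # accumulated in float32
fp16_total = products.sum()                     # accumulated in float16

print(fp32_total)  # 131072.0
print(fp16_total)  # inf -- the FP16 accumulator overflows
```

If something like this is happening inside the ConvTranspose layer, it would explain the drop without the layer itself being unsupported.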
So my question is: does the current JetPack version not support FP16 conversion of the ConvTranspose layer?
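And if it is a precision issue rather than an unsupported layer, would pinning the ConvTranspose layer to FP32 be the recommended workaround? Something like the following (the layer name is a placeholder for whatever Netron or the verbose build log reports; I believe these flags exist in the trtexec that ships with my TensorRT version):

```shell
# Build an FP16 engine but keep the (placeholder-named) ConvTranspose layer in FP32.
trtexec --onnx=yolov7_convtranspose.onnx \
        --saveEngine=yolov7_convtranspose.engine \
        --fp16 \
        --precisionConstraints=obey \
        --layerPrecisions="ConvTranspose_123":fp32
```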