TensorRT conversion error for TAO RetinaNet model on Jetson Xavier NX

Hi, while converting a TAO RetinaNet model from .onnx to a TensorRT .plan engine on a Jetson Xavier NX, we are getting a conversion error.

Conversion is done like this:
trtexec --onnx=model.onnx --saveEngine=model.plan
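In case it helps with triage, re-running the conversion with verbose logging usually names the exact op or layer that fails to parse. A minimal sketch, assuming trtexec sits at the usual JetPack install path (override `TRTEXEC` if yours differs):

```shell
#!/bin/sh
# Sketch: rerun the conversion with --verbose so the failing layer is named
# in the output. /usr/src/tensorrt/bin is the usual JetPack location; adjust
# TRTEXEC if trtexec is elsewhere on your device.
TRTEXEC=${TRTEXEC:-/usr/src/tensorrt/bin/trtexec}
"$TRTEXEC" --onnx=model.onnx --saveEngine=model.plan --verbose 2>&1 | tee trtexec_verbose.log
```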

Full log attached: tensorrt_conversion_log.txt (103.1 KB)
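Since the log is fairly long, here is the small helper we used to skim it for the failing layer; a minimal sketch, where the `[E]`/`Error` markers are assumptions about how trtexec's logger tags error lines:

```python
# Sketch: filter a trtexec log down to the lines that look like errors,
# so the failing op/layer is easy to spot in a 100+ KB log.
def extract_errors(lines):
    """Return log lines containing common TensorRT error markers."""
    markers = ("[E]", "ERROR", "Error")
    return [line.rstrip() for line in lines
            if any(m in line for m in markers)]

if __name__ == "__main__":
    # Filename matches the attachment above; adjust as needed.
    with open("tensorrt_conversion_log.txt") as f:
        for line in extract_errors(f):
            print(line)
```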

We are using the AAEON BOXER-8251AI box PC (built around the Xavier NX). The driver and TensorRT versions installed are listed below. The same model previously converted without issues on other devices, including a Jetson Orin. Can you help us with this? Thanks.

TensorRT Version : tensorrt/now, nvidia-tensorrt/now 5.0.2-b231
GPU Type : NVIDIA® Jetson Xavier™ NX
NVIDIA Driver Version : nvidia-jetpack/now 5.0.2-b231
CUDA Version : cuda-11-4/now 11.4.14-1, nvidia-cuda/now 5.0.2-b231
CUDNN Version : libcudnn8/now, nvidia-cudnn8/now 5.0.2-b231
Operating System + Version : Ubuntu 20.04.6 LTS
Python Version (if applicable) :
TensorFlow Version (if applicable) : not installed
PyTorch Version (if applicable) : not installed
Baremetal or Container (if container which image + tag) : baremetal

This looks like a Jetson issue, so we request that you raise it on the respective Jetson forum.