TensorRT conversion error for TAO RetinaNet model on Jetson Xavier NX

Hi, while converting the TAO RetinaNet model from .onnx to a TensorRT .plan engine on a Jetson Xavier NX, we are getting a conversion error.

Conversion is done like this:
trtexec --onnx=model.onnx --saveEngine=model.plan

Log attached: tensorrt_conversion_log.txt (103.1 KB)
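
For reference, the same build can also be driven from the TensorRT Python API with a verbose logger, which reports the ONNX parser errors in full. This is only a minimal sketch; the file names are placeholders for our actual files:

import tensorrt as trt

# Build the engine with a verbose logger so parser errors are shown in full.
logger = trt.Logger(trt.Logger.VERBOSE)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        # Print every error the ONNX parser recorded before giving up.
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

config = builder.create_builder_config()
plan = builder.build_serialized_network(network, config)
with open("model.plan", "wb") as f:
    f.write(plan)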

We are using the AAEON BOXER-8251AI (an AI@Edge fanless embedded box PC with NVIDIA Jetson Xavier NX). The driver and TensorRT versions were installed as listed below. The model was previously converted on other devices, including a Jetson Orin, without any compilation issues. Can you help us with this? Thanks

Environment

TensorRT Version : tensorrt/now 8.4.1.5-1+cuda11.4, nvidia-tensorrt/now 5.0.2-b231
GPU Type : NVIDIA® Jetson Xavier™ NX
Nvidia Driver Version : nvidia-jetpack/now 5.0.2-b231
CUDA Version : cuda-11-4/now 11.4.14-1, nvidia-cuda/now 5.0.2-b231
CUDNN Version : libcudnn8/now 8.4.1.50-1+cuda11.4, nvidia-cudnn8/now 5.0.2-b231
Operating System + Version : Ubuntu 20.04.6 LTS
Python Version (if applicable) :
TensorFlow Version (if applicable) : not installed
PyTorch Version (if applicable) : not installed
Baremetal or Container (if container which image + tag) : baremetal

Dear @harryhirsch,
Could you check the ONNX opset? Please see whether TensorRT Parsing ONNX Model Error - #9 by philminhnguyen helps.
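
For example, the declared opset can be printed with the onnx Python package. A minimal sketch, where "model.onnx" stands in for your exported model:

import onnx

# Print every opset the model declares (an empty domain means the
# default "ai.onnx" operator set).
model = onnx.load("model.onnx")
for opset in model.opset_import:
    print(opset.domain or "ai.onnx", opset.version)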

I am getting this error when checking the model.


ValidationError                           Traceback (most recent call last)
/tmp/ipykernel_225078/4085541278.py in <module>
      3
      4 # model = onnx.load(filename)
----> 5 onnx.checker.check_model("model.onnx")

/opt/conda/lib/python3.7/site-packages/onnx/checker.py in check_model(model, full_check)
    123     # If model is a path instead of ModelProto
    124     if isinstance(model, str):
--> 125         C.check_model_path(model, full_check)
    126     else:
    127         protobuf_string = (

ValidationError: Field 'shape' of 'type' is required but missing.
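
One thing that might be worth trying is running ONNX shape inference to fill in the missing tensor shapes and then re-running the checker. This is only a sketch; whether it resolves the error depends on where the shape information is missing:

import onnx
from onnx import shape_inference

# Infer the missing tensor shapes, save the result, and validate again.
model = onnx.load("model.onnx")
inferred = shape_inference.infer_shapes(model)
onnx.save(inferred, "model_with_shapes.onnx")
onnx.checker.check_model("model_with_shapes.onnx")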

I am also not able to load the model in onnxruntime with the TensorRT execution provider, as described in the topic Error when running retinanet model in onnxruntime with tensorrt execution accelerator.
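
For reference, the loading attempt looks roughly like this; this is a minimal sketch and assumes an onnxruntime build that includes the TensorRT execution provider:

import onnxruntime as ort

# Ask for the TensorRT EP first, falling back to CUDA and then CPU.
providers = [
    "TensorrtExecutionProvider",
    "CUDAExecutionProvider",
    "CPUExecutionProvider",
]
session = ort.InferenceSession("model.onnx", providers=providers)
print(session.get_providers())  # shows which providers were actually enabled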

However, I am able to convert the model to TensorRT on several other platforms, including an A5000 GPU and a Jetson Orin, so I suspect the problem is specific to the Jetson Xavier NX.

Dear @harryhirsch,
Could you share the ONNX model?

Hi, I have sent you the model in a chat, since it is trained on our data.

Dear @harryhirsch ,

It looks like an issue with the JetPack version. I could build the TensorRT engine using trtexec on JetPack 5.1.2. Could you check with the latest JetPack release for Xavier NX?
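
To confirm which TensorRT build a given JetPack release ships, you can check the Python binding on the device. The version numbers in the comment are indicative only:

import tensorrt as trt

# JetPack 5.0.2 ships TensorRT 8.4.1; JetPack 5.1.x ships a newer 8.5.x build.
print(trt.__version__)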
