Converting ResNet50.onnx to a TensorRT Engine on Jetson Nano


I need ResNet50 without the top layer, for feature extraction only. I created the ResNet50 model (include_top=False) in Keras and then exported it to ONNX format. All steps were done following this tutorial: Nvidia TensorRT Notebook

Then I moved the model to the Jetson Nano and created the engine:

/usr/src/tensorrt/bin/trtexec --onnx=/home/jetson/model/resnet50_without_top.onnx --saveEngine=/home/jetson/model/resnet_engine.trt --explicitBatch --workspace=1024

but when I run the prediction on the Jetson Nano I get a different result than from the PC version.
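To check whether the outputs really diverge (rather than just differing by float tolerance), the flattened feature vectors from both machines can be compared like this (the feature lists below are placeholders for your actual exported outputs):

```python
# Sketch: quantify how far the Jetson (TensorRT) features are from the
# PC (Keras) features.
import math

def max_abs_diff(a, b):
    """Largest element-wise absolute difference between two vectors."""
    return max(abs(x - y) for x, y in zip(a, b))

def cosine_similarity(a, b):
    """Cosine similarity; close to 1.0 means the features match in direction."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

pc_features = [0.1, 0.5, 0.2, 0.9]      # placeholder: PC (Keras) output
jetson_features = [0.1, 0.5, 0.2, 0.9]  # placeholder: Jetson (TRT) output
print(max_abs_diff(pc_features, jetson_features))       # 0.0 when identical
print(cosine_similarity(pc_features, jetson_features))  # ~1.0 when identical
```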


TensorRT Version:
Jetson Version: nvidia-jetpack 4.4-b144
Operating System + Version: Jetson and Fedora on PC
Python Version (if applicable): 3.7
TensorFlow Version (if applicable): keras

Solved. I needed to specify the input shape as below:

/usr/src/tensorrt/bin/trtexec --onnx=/home/jetson/cash15/resnet50_without_top.onnx --saveEngine=/home/jetson/cash15/resnet_engine.trt --explicitBatch --shapes=input_1:1x224x224x3

Before this I always got the warning:

[06/22/2021-14:57:54] [W] Dynamic dimensions required for input: input_1, but no shapes were provided. Automatically overriding shape to: 1x1x1x3
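The 1x1x1x3 in the warning follows from how trtexec handles unspecified dynamic dimensions: a Keras NHWC export typically has input shape (-1, -1, -1, 3) with dynamic batch/height/width, and each dynamic dimension is overridden to 1 when no --shapes are given. An illustration of that rule (not the TensorRT API, just the logic):

```python
# Illustration only (not TensorRT code): with no --shapes provided, trtexec
# replaces every dynamic dimension (-1) with 1.
def override_dynamic_dims(shape):
    return [1 if d == -1 else d for d in shape]

# Keras NHWC export: dynamic batch, height, width; fixed 3 channels.
onnx_input_shape = [-1, -1, -1, 3]
print(override_dynamic_dims(onnx_input_shape))  # -> [1, 1, 1, 3], as in the warning
```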

What is curious is that the network accepted an image of shape (1, 224, 224, 3) as input and still ran (but produced wrong output data).

Best regards.


Please add the dynamic shape information to trtexec as well.
For example:

$ /usr/src/tensorrt/bin/trtexec --minShapes=input_1:1x224x224x3 --optShapes=input_1:1x224x224x3 --maxShapes=input_1:1x224x224x3 ...
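The min/opt/max triple defines an optimization profile: the built engine accepts any runtime shape where every dimension lies within [min, max], and opt is the shape the kernels are tuned for. Setting all three equal, as above, effectively pins a static shape. A small illustration of the acceptance rule (concept only, not the TensorRT API):

```python
# Illustration only: an optimization profile accepts a runtime shape only if
# every dimension lies within its [min, max] range.
def shape_in_profile(shape, min_shape, max_shape):
    return all(lo <= d <= hi
               for d, lo, hi in zip(shape, min_shape, max_shape))

# min == opt == max, as in the trtexec command above -> effectively static.
min_s = max_s = (1, 224, 224, 3)
print(shape_in_profile((1, 224, 224, 3), min_s, max_s))  # True
print(shape_in_profile((2, 224, 224, 3), min_s, max_s))  # False: batch 2 out of range
```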


This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.