Compiling an ONNX model via trtexec on JetPack 5.0.2

I’m facing an issue converting an ONNX model with dynamic shapes into a TensorRT engine file on the Jetson Xavier NX.
I’m using this command:

/usr/src/tensorrt/bin/trtexec --onnx=modellodasaved.onnx --saveEngine=m1t1_engine.trt --shapes=image_input:-1x-1x-1x3,template_input:-1x-1x-1x3 --minShapes=image_input:1x1x1x3,template_input:1x1x1x3 --maxShapes=image_input:1x800x600x3,template_input:1x600x400x3 --optShapes=image_input:1x800x600x3,template_input:1x600x400x3 --verbose

the final error is:

[09/23/2022-12:21:31] [V] [TRT] *************** Autotuning Reformat: Float((* (# 2 E2) (+ (# 1 (RESHAPE 1 E3 E4 320 | 1 (* E3 E4) -1 zeroIsPlaceholder)) (# 1 E2))),(# 2 E2),1) where E0=(+ (CEIL_DIV (+ (# 1 (SHAPE image_input)) -7) 4) 1) E1=(+ (CEIL_DIV (+ (# 2 (SHAPE image_input)) -7) 4) 1) E2=(RESHAPE 1 E0 E1 320 | 1 (* E0 E1) -1 zeroIsPlaceholder) E3=(+ (CEIL_DIV (+ (# 1 (SHAPE template_input)) -7) 4) 1) E4=(+ (CEIL_DIV (+ (# 2 (SHAPE template_input)) -7) 4) 1) → Float(1,(# 2 (RESHAPE 1 E0 E1 320 | 1 (* E0 E1) -1 zeroIsPlaceholder)),1) where E0=(+ (CEIL_DIV (+ (# 1 (SHAPE image_input)) -7) 4) 1) E1=(+ (CEIL_DIV (+ (# 2 (SHAPE image_input)) -7) 4) 1) ***************
[09/23/2022-12:21:31] [V] [TRT] Deleting timing cache: 60 entries, served 14 hits since creation.
[09/23/2022-12:21:31] [E] Error[2]: [blockChooser.cpp::getRegionBlockSize::666] Error Code 2: Internal Error (Assertion memSize >= 0 failed. )
[09/23/2022-12:21:31] [E] Error[2]: [builder.cpp::buildSerializedNetwork::636] Error Code 2: Internal Error (Assertion engine != nullptr failed. )

Thanks in advance for your kind help.

modellodasaved.onnx (8.9 MB)


Moving your topic to the Xavier NX board based on your description.

We can reproduce this error internally.
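Looking at the verbose log, the `CEIL_DIV` expressions hint at one possible trigger: with the declared minimum of 1x1x1x3, a downsampling stage of the form `ceil((dim - 7) / 4) + 1` collapses to zero spatial elements, which could explain the `memSize >= 0` assertion. A minimal sketch of that arithmetic (the 7/4 values are read from your log, not from the model itself):

```python
import math

def downsampled_dim(size: int, shrink: int = 7, stride: int = 4) -> int:
    # Mirrors the TensorRT log expression: (+ (CEIL_DIV (+ dim -7) 4) 1)
    return math.ceil((size - shrink) / stride) + 1

# At the declared minShapes (H = W = 1), the spatial dims collapse to zero.
print(downsampled_dim(1))    # 0
# At the maxShapes (800x600), the dims stay positive.
print(downsampled_dim(800))  # 200
print(downsampled_dim(600))  # 150
```

If this is the cause, it may be worth trying minShapes large enough that every stage keeps a positive extent (for example 32x32 instead of 1x1), though this is only a guess from the log.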
Have you run the model with other frameworks, like ONNXRuntime, before?


We haven’t tried it yet; we’ll give ONNXRuntime a try.
Until now we have used the .pb model (a TensorFlow SavedModel), and it works quite well on the Jetson, but it takes a long time to load.
Thanks, we look forward to your news.

Hi AastaLLL,
we compiled the model with fixed sizes (for both image_input and template_input). This way the whole pipeline (pb → onnx → trt) works.
The problem is that we need at least 30 models ready, and right now we can keep only 15 of them ready to use (within a reasonable memory limit). If you manage to convert the dynamic-shape model to TensorRT, it will be fantastic.
Any news?


We can give it a check.

How do you compile the model with a fixed size?
Do you rewrite the input information of the ONNX model? And which dimensions do you use when generating it?


First of all, thank you for your help.
We compile the model with fixed dimensions from TensorFlow, setting the two input layers to fixed sizes, and then regenerate the ONNX model.
As already mentioned, the dimensions are variable; however, there are two images, one larger (maximum size 800x600x3) and one smaller (maximum size 200x200x3).


Is rescaling the input images an option for you?
Some of our users choose to resize the images and use a fixed input model.
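For example, a letterbox-style resize, which scales while preserving the aspect ratio and pads to the fixed engine size instead of stretching, can limit the accuracy impact. A generic NumPy sketch (names and sizes are illustrative; a real pipeline would use a proper interpolation from OpenCV or similar instead of the nearest-neighbour sampling used here):

```python
import numpy as np

def letterbox(img: np.ndarray, target_h: int, target_w: int) -> np.ndarray:
    """Scale to fit inside (target_h, target_w), preserving aspect ratio,
    then zero-pad to the exact fixed input size of the engine."""
    h, w, c = img.shape
    scale = min(target_h / h, target_w / w)
    new_h, new_w = max(1, int(h * scale)), max(1, int(w * scale))
    # Nearest-neighbour sampling via index arrays (no external deps).
    rows = (np.arange(new_h) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(new_w) / scale).astype(int).clip(0, w - 1)
    resized = img[rows][:, cols]
    canvas = np.zeros((target_h, target_w, c), dtype=img.dtype)
    canvas[:new_h, :new_w] = resized
    return canvas

frame = np.random.randint(0, 255, (480, 720, 3), dtype=np.uint8)
fixed = letterbox(frame, 800, 600)
print(fixed.shape)  # (800, 600, 3)
```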


Thanks for the advice.
We tried scaling the input, but doing so worsens the performance (detection accuracy).
We are currently using the workaround of generating at most 6 models with the most representative input sizes.
Having a single model with dynamic shapes would certainly have been great!

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.