Error in optimising SSD Inception model with TensorRT for custom resolution

I am using Jetson AGX Xavier with Jetpack 4.2.1

I have not altered the TensorRT, UFF, or graphsurgeon versions; they are the stock versions from JetPack.

I have retrained SSD Inception v2 model on custom 600x600 images.

I took the pretrained model from here.

I have changed height and width to 600x600 in pipeline.config.
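For reference, this change lives in the image_resizer block of pipeline.config (a sketch, assuming the standard TF Object Detection API SSD config layout):

```
image_resizer {
  fixed_shape_resizer {
    height: 600
    width: 600
  }
}
```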

I am using the sampleUffSSD sample contained in the TensorRT samples.

In the preprocessing file passed to the converter with -p, I replaced 300 with 600 in the input shape.

I generated frozen_graph.uff with the command: python3 convert_to_uff.py frozen_inference_graph.pb -O NMS -p

In file BatchStreamPPM.h:

I changed

static constexpr int INPUT_H = 600; // replaced 300 by 600
static constexpr int INPUT_W = 600; // replaced 300 by 600
mDims = nvinfer1::DimsNCHW{batchSize, 3, 600, 600}; // replaced 300 by 600

In file sampleUffSSD.cpp

I changed

parser->registerInput("Input", DimsCHW(3, 600,600), UffInputOrder::kNCHW); // replaced 300 by 600
Then I rebuilt the sample:

cd sampleUffSSD
make clean ; make

When I ran sample_uff_ssd, I got the error below:

&&&& RUNNING TensorRT.sample_uff_ssd # ./../../bin/sample_uff_ssd
[I] ../../data/ssd/sample_ssd_relu6.uff
[I] Begin parsing model...
[I] End parsing model...
[I] Begin building engine...
sample_uff_ssd: nmsPlugin.cpp:139: virtual void nvinfer1::plugin::DetectionOutput::configureWithFormat(const nvinfer1::Dims*, int, const nvinfer1::Dims*, int, nvinfer1::DataType, nvinfer1::PluginFormat, int): Assertion `numPriors * numLocClasses * 4 == inputDims[param.inputOrder[0]].d[0]' failed.
Aborted (core dumped)

I think the problem is with the resolution.

How can I optimise the model for a custom resolution?

It works fine with 300x300 resolution.
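The failing assertion checks that the NMS plugin's first input (the box-location tensor) holds numPriors * numLocClasses * 4 values, where numPriors comes from the grid-anchor configuration. A rough sketch of the arithmetic, assuming the standard SSD anchor layout (3 boxes per cell on the first feature map, 6 on the others); the 600x600 feature-map sizes here are an assumption based on each level halving in size:

```python
def num_priors(feature_map_sizes, boxes_per_cell):
    """Total number of prior (anchor) boxes summed over all SSD feature maps."""
    return sum(s * s * b for s, b in zip(feature_map_sizes, boxes_per_cell))

boxes = [3, 6, 6, 6, 6, 6]  # standard SSD box counts per location

# 300x300 SSD feature maps (as configured for the sample)
priors_300 = num_priors([19, 10, 5, 3, 2, 1], boxes)

# 600x600 graph: each feature map roughly doubles (assumed)
priors_600 = num_priors([38, 19, 10, 5, 3, 2], boxes)

print(priors_300, priors_300 * 4)  # 1917 7668: what a 300-based plugin config expects
print(priors_600, priors_600 * 4)  # 7326 29304: what the 600x600 graph actually produces
```

Because the anchor configuration still describes the 300x300 feature maps while the retrained graph emits boxes for 600x600 feature maps, the two sides of the assertion disagree and the build aborts.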


The simplest approach is to add a scaler before inference to downscale the image from 600x600 to 300x300.
This preserves the same model architecture.

If you want to expand the model input to 600x600, it's recommended to check our README first: /usr/src/tensorrt/samples/sampleUffSSD/
You will need to update the plugin file for the customized input dimension.
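Concretely, the 300x300 assumption lives in the config.py preprocessing file passed to the UFF converter with -p. A sketch of the two plugin nodes that would need to change (incomplete fragment; the 600x600 featureMapShapes are an assumption derived from doubling each 300x300 feature map, and all other parameters are omitted):

```python
# config.py fragment (the -p file for the UFF converter); not a complete file
Input = gs.create_plugin_node(name="Input", op="Placeholder",
    dtype=tf.float32,
    shape=[1, 3, 600, 600])                   # was [1, 3, 300, 300]

PriorBox = gs.create_plugin_node(name="GridAnchor", op="GridAnchor_TRT",
    # ... other parameters unchanged ...
    featureMapShapes=[38, 19, 10, 5, 3, 2])   # was [19, 10, 5, 3, 2, 1] (assumed)
```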


Hello AastaLLL,

I get an error while building the engine, as you can see in my error log.

The engine starts building but does not complete successfully.

Your statement [b]"The simplest approach is to add a scaler before the inference to downscale image from 600x600 into 300x300."[/b] would only apply once the engine is built, because I get the error while building the engine.



Sorry, my previous statement may not have been clear enough.

Did you train your model with 600x600 input images?
If not, you can just use the 300x300 model and add an extra scaler to downscale the image.

A 300x300 model can be converted into a TensorRT engine without updates.
The scaler should be independent of the TensorRT model, e.g. OpenCV or another image-processing library.
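In practice the scaler is one call such as OpenCV's cv2.resize(img, (300, 300)). A dependency-free nearest-neighbour sketch of the same idea (the nested-list image format is just for illustration):

```python
def downscale(img, out_h, out_w):
    """Nearest-neighbour resize of an image stored as a nested list [H][W]."""
    in_h, in_w = len(img), len(img[0])
    return [[img[r * in_h // out_h][c * in_w // out_w]
             for c in range(out_w)]
            for r in range(out_h)]

# 4x4 -> 2x2: keeps every other pixel
small = downscale([[0, 1, 2, 3],
                   [4, 5, 6, 7],
                   [8, 9, 10, 11],
                   [12, 13, 14, 15]], 2, 2)
print(small)  # [[0, 2], [8, 10]]
```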


I trained the model with 600x600 input images.

What do I need to do?


You will need to update the model architecture which is defined in the

Would you mind sharing your model with us so we can check it for you?
A .pb file would be good.



Please find the frozen graph .pb file here.

Model was trained on 600x600 images.


The link you shared in comment #7 seems to be broken.
Could you check it again?