TLT to TensorRT and input size

Hello,

I have trained and exported a FasterRCNN model with TLT.
Then, I use tlt-converter to create a TensorRT engine for my C++ program.
When using tlt-converter, I have to specify the input image size through the -d parameter.
If I understand correctly, the input size is therefore fixed in pixels.

One property of FasterRCNN is that the model can be fed with images of different input sizes (thanks to RoI pooling).
This is very convenient, since it makes it possible to resize images while keeping their aspect ratio.
For example, in TensorFlow, this is done with the "keep aspect ratio" resizer, at both training and testing time.
In that case, the input of the network is not a fixed size in pixels, but is constrained by "min" and "max" side dimensions.

Then, for an input image of any size, we can choose resizing dimensions that preserve the original aspect ratio, without warping the objects.

How can I use this property of FasterRCNN with TLT / TRT?

I have asked a similar question (without TLT) in
https://devtalk.nvidia.com/default/topic/1070321/tensorrt/faster-rcnn-and-variable-input-size/post/5422427/#5422427
but I didn’t solve the problem yet.

Hi dbrazey,
When you generate a TRT engine with tlt-converter, "-d" is a required argument. As the TLT user guide notes, it is a comma-separated list of input dimensions that must match the dimensions used for tlt-export.

Hello,

Thanks for your answer.

That is exactly what I did.
My question is that I need to use the TRT engine, generated from the etlt file, with various input sizes.
Is it possible to tell tlt-converter that I need a TRT engine with dynamic input?

That does not seem to be possible.
When generating a TRT engine, tlt-converter cannot accept a dynamic input setting in the command.