Hello,
I have trained and exported a FasterRCNN model with TLT.
Then, I use tlt-converter to create a TensorRT engine for my C++ program.
When using tlt-converter, I have to specify the input image size through the -d parameter.
If I understood correctly, the input size is therefore fixed to a specific resolution in pixels.
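For reference, here is roughly how I invoke it (the key, file names, dimensions and output node names below are placeholders from my setup, not something I am asserting is the only correct form):

```shell
# Hypothetical tlt-converter call for a TLT FasterRCNN model.
# -d pins the engine input to fixed CHW dims (here 3x544x960),
# which is exactly the constraint I am asking about.
tlt-converter -k $NGC_KEY \
              -d 3,544,960 \
              -e frcnn_fp32.engine \
              frcnn_model.etlt
```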
One property of FasterRCNN is that the model can be fed with images of different input sizes (thanks to RoI pooling).
This is very convenient, since it makes it possible to resize images while preserving their aspect ratio.
For example, in TensorFlow this is done with the "keep aspect ratio" resizer, at both training and testing time.
In that case, the input of the network is not a fixed size in pixels, but "min" and "max" side dimensions.
Then, for an input image of any size, we can choose the resizing dimensions so that the aspect ratio of the original image is preserved, without warping the objects.
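To make the TensorFlow behavior I mean concrete, this is the kind of fragment used in a TF Object Detection API pipeline config (the 600/1024 values are just illustrative, not a recommendation):

```
# Excerpt from a TF Object Detection API pipeline.config (illustrative values).
# The network accepts any image resized so its short side is >= 600 px
# and its long side is <= 1024 px, preserving the aspect ratio.
image_resizer {
  keep_aspect_ratio_resizer {
    min_dimension: 600
    max_dimension: 1024
  }
}
```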
How can I use this property of FasterRCNN with TLT / TensorRT?
I have asked a similar question (without TLT) in
https://devtalk.nvidia.com/default/topic/1070321/tensorrt/faster-rcnn-and-variable-input-size/post/5422427/#5422427
but I haven’t solved the problem yet.