How to dynamically set up the input size for a Caffe model?


My use case is like this:
Although the input dimensions in a Caffe model’s deploy.prototxt are given as NCHW=(1,3,1,1), the real input blob size is only supplied by the application at runtime, e.g. the size of an image.

The Caffe way of dealing with this is to simply initialize the input blob as (1,1,1,1); once the application acquires the image size (H,W), it uses Caffe’s reshape() method to resize the input blob to (1,3,H,W) and propagate the change to all subsequent blobs.
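For reference, the reshape idiom described above looks roughly like this with the Caffe C++ API (a hedged sketch; the prototxt/caffemodel file names are placeholders, and error handling is omitted):

```cpp
// Sketch of Caffe's dynamic-input pattern: init small, reshape at runtime.
#include <caffe/caffe.hpp>

void runWithRuntimeSize(int H, int W) {
  caffe::Net<float> net("deploy.prototxt", caffe::TEST);
  net.CopyTrainedLayersFrom("model.caffemodel");

  // Resize the input blob to the image size acquired by the application...
  net.input_blobs()[0]->Reshape(1, 3, H, W);
  // ...then let Caffe propagate the new shape through every subsequent blob.
  net.Reshape();

  net.Forward();  // output blobs now match the (1,3,H,W) input
}
```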

Is there any way in TensorRT to do something similar?
If yes, any sample code available?

I’ve tried setting a DimsNCHW of (1,3,H,W) and then calling the setDimensions(dims) method on the input nvinfer1::ITensor to resize the input blob, but the output blob didn’t adjust accordingly.
As evidence, querying the output blob with getDimensions() returned invalid results. In fact, the whole TensorRT network became unusable after calling setDimensions() on the input blob.

My environment:
TensorRT 5 RC on Windows 10 with a GeForce 1080 Ti.

Thanks a lot!


TensorRT does not support adaptive input sizes yet. The user needs to specify the input size at build time so that TRT can optimize the network for those fixed dimensions.
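Given that constraint, one workaround in this TensorRT 5-era API is to defer engine building until the real image size is known, and declare the input with the actual dimensions. A hedged sketch (the input name "data" is an assumption, and the remaining layers are elided):

```cpp
// Sketch: build the TensorRT engine only after (H, W) is known,
// since input dimensions must be fixed at build time.
#include <NvInfer.h>

nvinfer1::ICudaEngine* buildForSize(nvinfer1::IBuilder* builder,
                                    nvinfer1::INetworkDefinition* network,
                                    int H, int W) {
  // Declare the input with the real runtime dimensions instead of
  // placeholder ones; "data" is an assumed input tensor name.
  nvinfer1::ITensor* input = network->addInput(
      "data", nvinfer1::DataType::kFLOAT, nvinfer1::DimsCHW{3, H, W});
  (void)input;
  // ... add the rest of the network layers here ...

  builder->setMaxBatchSize(1);
  // The resulting engine is optimized for exactly (3, H, W).
  return builder->buildCudaEngine(*network);
}
```

The trade-off is that a new engine must be built (or loaded from a cache of pre-built engines) for each distinct input size the application needs.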