Smaller input image size

Hi,

I am doing image inference with TensorRT. I built my engine with a max input size and an imageNet class similar to dusty-nv’s (except that I do not resize the input image before inference; I can’t).

It works fine when the image size is the same as the max input size, but when I give it a smaller input, I cannot figure out how the data is stored in the output buffer.

Let’s say my max input size is 1x2048x2048 and the corresponding output size is 6x247x247. No problem here: I just reshape the output vector to its expected shape.
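For reference, the full-size path looks roughly like this on my side (simplified; names like `output_flat` are placeholders, not my actual code):

```python
import numpy as np

# Full-size case: the input matches the size the engine was built with.
C, H, W = 1, 2048, 2048            # max input size
OUT_C, OUT_H, OUT_W = 6, 247, 247  # corresponding output size

image = np.random.rand(C, H, W).astype(np.float32)  # placeholder image

# Flatten to 1D (CHW, row-major) before copying into the input buffer.
input_flat = image.reshape(-1)

# ... copy input_flat to the device, run the engine, copy the output back ...
output_flat = np.empty(OUT_C * OUT_H * OUT_W, dtype=np.float32)  # placeholder

# At full size this reshape gives the expected values.
output = output_flat.reshape(OUT_C, OUT_H, OUT_W)
```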

But if my image is 1x1024x1024, the expected output size is 6x119x119, yet the intuitive reshape (the numpy one) does not give the expected results (I am comparing against a PyTorch run), and I do not see the correct values anywhere in the vector.

It might be related to the input image format. My model eats a flattened, one-dimensional numpy array (.reshape(-1)). But when the image is smaller, maybe I need to store it in the input buffer in a different way.
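My guess for why the flat layouts don’t line up, just as an illustration of row-major indexing and assuming the engine keeps walking the buffer with the strides of the max size (which I am not sure about):

```python
import numpy as np

# In a flattened row-major CHW array, element (c, y, x) of a HxW image
# sits at index c*H*W + y*W + x, so a pixel's offset depends on the width.
small = np.arange(1 * 1024 * 1024, dtype=np.float32).reshape(1, 1024, 1024)
flat_small = small.reshape(-1)

idx_small = 0 * 1024 * 1024 + 1 * 1024 + 0  # pixel (c=0, y=1, x=0) in the 1024-wide layout

# Assumption: if the engine still uses the 2048-wide strides it was built
# with, it would look for that same pixel at a different offset.
idx_max = 0 * 2048 * 2048 + 1 * 2048 + 0

print(idx_small, idx_max)  # same pixel, different indices in the flat buffer
```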

Does anyone have info on how data is stored in the buffers when the input image is smaller than the size the engine was built with?

Edit: it seems that padding the smaller image with zeros to fit the max-size tensor is enough to fool TensorRT. Results seem OK so far.
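In case it helps someone else, this is roughly the padding I ended up doing (simplified sketch; it assumes the network is fully convolutional, so the part corresponding to the real image should end up in the top-left of the output, modulo border effects from the zeros):

```python
import numpy as np

MAX_C, MAX_H, MAX_W = 1, 2048, 2048  # size the engine was built with
OUT_C, OUT_H, OUT_W = 6, 247, 247    # output size at max input size

def pad_to_max(image):
    """Place a smaller CHW image in the top-left corner of a zeroed
    max-size tensor so it can be fed to the fixed-size engine."""
    c, h, w = image.shape
    padded = np.zeros((MAX_C, MAX_H, MAX_W), dtype=np.float32)
    padded[:c, :h, :w] = image
    return padded.reshape(-1)  # flatten for the input buffer

# Example with a 1x1024x1024 image.
small = np.random.rand(1, 1024, 1024).astype(np.float32)
input_flat = pad_to_max(small)

# ... run inference as usual, get output_flat back from the engine ...
output_flat = np.empty(OUT_C * OUT_H * OUT_W, dtype=np.float32)  # placeholder

output = output_flat.reshape(OUT_C, OUT_H, OUT_W)

# Assumption: the values for the real 1024x1024 image are the top-left
# 6x119x119 slice; values near that boundary may be affected by the zeros.
valid = output[:, :119, :119]
```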