Faster R-CNN with different scales

Hi,

I am using the Faster R-CNN sample and I want to forward images of different resolutions with one context.

// Deserialize the engine from the serialized model stream and create one execution context.
IRuntime* runtime = createInferRuntime(gLogger);
ICudaEngine* engine = runtime->deserializeCudaEngine(gieModelStream->data(), gieModelStream->size(), &pluginFactory);
IExecutionContext* context = engine->createExecutionContext();

// for each image, do inference with the context
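// (Roughly what the per-image step looks like with the old implicit-batch API;
// the buffer names, sizes and binding indices below are just illustrative,
// not taken from the actual sample:)
cudaMemcpy(buffers[inputIndex], inputData, inputSize, cudaMemcpyHostToDevice);
context->execute(1, buffers);   // batch size 1
cudaMemcpy(outputData, buffers[outputIndex], outputSize, cudaMemcpyDeviceToHost);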

However, the input dims specified in the prototxt are fixed (e.g., 1 x 3 x 375 x 500).

If the resolution of an image differs from the input dims, the program fails.

When using Caffe, I can reshape the input layer before forwarding.
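For reference, this is roughly what I do in Caffe (a minimal sketch; net is an already loaded caffe::Net<float>, and imageHeight/imageWidth are just placeholders for the current image size):

#include <caffe/caffe.hpp>

// Reshape the input blob to the current image size, then propagate the
// new shape through the network before forwarding.
caffe::Blob<float>* input = net->input_blobs()[0];
input->Reshape(1, 3, imageHeight, imageWidth);   // N, C, H, W
net->Reshape();
net->Forward();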

I am not sure how to do this in TensorRT.

Thanks.

Hi,

Please downscale the image to the expected resolution first.

You can check this sample:
[url]https://github.com/dusty-nv/jetson-inference/blob/master/imageNet.cu#L97[/url]
It resizes the image and subtracts the mean value at the same time.
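If you prefer to preprocess on the CPU instead, a rough OpenCV equivalent would look something like this (just a sketch; the 500x375 size and the mean values are placeholders, please use the ones from your prototxt):

#include <opencv2/opencv.hpp>

// Resize to the fixed network input size and subtract the per-channel mean.
cv::Mat resized;
cv::resize(image, resized, cv::Size(500, 375));      // W x H expected by the engine
resized.convertTo(resized, CV_32FC3);
resized -= cv::Scalar(102.98f, 115.95f, 122.77f);    // BGR mean (placeholder values)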

Thanks.

Sorry.

What I meant is: is it possible to change the input blob size (CHW) after calling createExecutionContext?

No.

When the TensorRT engine is created, the tensor dimensions are fixed and chosen for optimized performance.
TensorRT doesn't support changing dimensions on the fly.
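You can query the fixed input size the engine was built with and bring every image to that size first. A rough sketch, assuming the TensorRT 3.x C++ API and that the input blob is named "data" in your prototxt:

// Query the fixed input dimensions baked into the engine at build time.
int inputIndex = engine->getBindingIndex("data");
nvinfer1::Dims dims = engine->getBindingDimensions(inputIndex);
// For a Caffe-parsed network this is CHW: dims.d[0]=C, dims.d[1]=H, dims.d[2]=W.
// Resize/pad every image to H x W before copying it into the input buffer
// and calling context->execute().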

Hi, are there any plans to support changing layer dimensions in TensorRT in the future?

We’re looking at the possibility but I can’t provide any firm commitments or estimates at this time.