About the SpaceToBatch and BatchToSpace operations


I am working on a SpaceToBatch and BatchToSpace plugin because my model contains some Conv1d layers with dilation rate > 1, and TensorFlow converts each of these layers into the operations below.

Conv1d with dilation rate > 1

is converted to

SpaceToBatch: reduces the spatial size and increases the batch size according to the dilation rate
              (batchSize_1 becomes batchSize_2)

ExpandDims  : inserts one dimension for the following Conv2D

Conv2d      : performs a normal 2D convolution

Squeeze     : removes the inserted dimension

BatchToSpace: the reverse of SpaceToBatch
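The shape arithmetic of this decomposition can be sketched in plain Python (my own illustration, simplified to 1-D; the helper names are not TensorFlow's API, and the input length is assumed to already be padded to a multiple of the dilation rate):

```python
# Sketch of how SpaceToBatch / BatchToSpace reshape a 1-D input
# (illustrative helper names, not TensorFlow's API).

def space_to_batch_shape(batch, length, dilation):
    # SpaceToBatch divides the spatial size by the dilation rate
    # and multiplies the batch size by it.
    assert length % dilation == 0
    return batch * dilation, length // dilation

def batch_to_space_shape(batch, length, dilation):
    # BatchToSpace is the inverse: batch shrinks, spatial size grows.
    assert batch % dilation == 0
    return batch // dilation, length * dilation

# Example: batchSize_1 = 4, spatial length 12, dilation rate 3.
b2, l2 = space_to_batch_shape(4, 12, 3)   # batchSize_2 = 12, length = 4
b1, l1 = batch_to_space_shape(b2, l2, 3)  # back to (4, 12)
```

This is exactly why the plugin question below comes up: the batch dimension itself changes across these two operations.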

My question: both SpaceToBatch and BatchToSpace change the batch size, but it seems I cannot change the batch size inside a plugin?

  1. For getOutputDimensions(int index, const nvinfer1::Dims *inputs, int nbInputDims), can I change the batch size here?



The dimensions in getOutputDimensions() are CHW only.
The batch size can be found in this function:

void configureWithFormat(const Dims* inputDims, int nbInputs, const Dims* outputDims, int nbOutputs, DataType type, PluginFormat format, int maxBatchSize) override

Please note that the value passed in is maxBatchSize.
The user can specify any batch size <= maxBatchSize at inference time.


@AastaLLL Thanks.

Let me confirm again:

void configureWithFormat(const Dims* inputDims, int nbInputs, const Dims* outputDims, int nbOutputs, DataType type, PluginFormat format, int maxBatchSize) override
   1. outputDims here includes [batch size, depth, height, width], right?
   2. Can I modify the batch size in outputDims so that the following layers apply the new batch size?
   3. If I change the batch size in outputDims, do I need to make sure its value stays smaller than maxBatchSize?

Thanks for any response.

Hi, @AastaLLL

I have checked configureWithFormat, and it seems it can't modify the batch size for the following layers.

Actually, the problem is that TensorFlow converts a dilated Conv layer into SpaceToBatch + Conv + BatchToSpace, and I found that TensorRT supports a dilated convolution operation directly (https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/c_api/classnvinfer1_1_1_i_convolution_layer.html).

So if I can replace TensorFlow's dilated Conv layer with TensorRT's, I don't need to implement SpaceToBatch, BatchToSpace, etc.
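As a sanity check that the two are really equivalent, here is a small NumPy sketch (my own code, not TensorFlow's or TensorRT's implementation; it assumes 1-D data, "valid" padding, and an input length divisible by the dilation rate):

```python
import numpy as np

def conv1d_valid(x, w):
    """Plain 1-D correlation with 'valid' padding (TF-style conv)."""
    k = len(w)
    return np.array([np.dot(x[i:i + k], w) for i in range(len(x) - k + 1)])

def dilated_conv1d(x, w, d):
    """Direct dilated convolution: y[i] = sum_k x[i + k*d] * w[k]."""
    k = len(w)
    out_len = len(x) - (k - 1) * d
    return np.array([sum(x[i + j * d] * w[j] for j in range(k))
                     for i in range(out_len)])

def dilated_via_space_to_batch(x, w, d):
    """SpaceToBatch + ordinary conv + BatchToSpace, as TensorFlow emits it."""
    assert len(x) % d == 0
    # SpaceToBatch: split x into d interleaved subsequences ("batch" grows by d).
    batch = [x[j::d] for j in range(d)]
    # Ordinary (dilation-1) convolution on each batch element.
    convs = [conv1d_valid(b, w) for b in batch]
    # BatchToSpace: interleave the per-batch results back into one sequence.
    out = np.empty(d * len(convs[0]))
    for j in range(d):
        out[j::d] = convs[j]
    return out

x = np.arange(12, dtype=float)
w = np.array([1.0, -2.0, 3.0])
print(np.allclose(dilated_conv1d(x, w, 2),
                  dilated_via_space_to_batch(x, w, 2)))  # prints True
```

So a single TensorRT IConvolutionLayer with dilation set should be able to stand in for the whole SpaceToBatch + Conv + BatchToSpace subgraph.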

So if I want to use TensorRT's dilated convolution operation:

  1. How do I replace the original dilated Conv node with TensorRT's dilated convolution operation?
gs.create_plugin_node(name="layer_name", op=" ? ")
  2. How does TensorRT's dilated convolution operation (nvinfer1::IConvolutionLayer) access the weights and biases in the model?



Not really. Only CHW information is included in inputDims and outputDims.

Would you mind testing your model with TensorRT 5.1 first? It is currently only available for desktop.
There is a fix for dilated conv in the UFF parser; maybe it helps.


@AastaLLL Thanks for the response.

OK, I will try TensorRT 5.1 on my desktop to check whether it can convert the dilated conv correctly.

And when will TensorRT 5.1 be available for Jetson Nano?



TensorRT 5.1 will be available for Nano soon.

It’s recommended to test it with desktop v5.1 first.
There are lots of different types of dilated conv, so I'm not sure whether your case is covered.