TensorRT 2.1 Efficient way to have multi-batch size in the same network description


I’m using TensorRT 2.1 and have integrated custom layers using the plugin factory features (without the Caffe parser).

I’m working on a network topology which uses a plugin layer that changes the batch size between its input tensor and its output tensor.

Example with “LayerType” to describe topology


Currently it seems that the only way to do this in TensorRT is to return 32 from the “virtual int getNbOutputs()” method and to add 32 separate Convolution layers after MyPlugin, plugging each one into one of the 32 MyPlugin outputs.
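To make the workaround concrete, here is a minimal sketch of that wiring. The types below are stand-ins for illustration only (the real `INetworkDefinition`/`ITensor` live in NvInfer.h), and `kNbOutputs` is the hypothetical value returned by `getNbOutputs()`:

```cpp
#include <cassert>
#include <vector>

// Stand-in types for illustration; not the real TensorRT API.
struct Tensor { int id; };

struct Network {
    int nextId = 0;
    // Each call models network->addConvolution(...) on one plugin output.
    Tensor addConvolution(Tensor /*input*/) { return Tensor{nextId++}; }
};

// The TensorRT 2.1 workaround: the plugin reports 32 outputs, and the
// network wires a separate Convolution layer onto each of them.
constexpr int kNbOutputs = 32;  // value returned by getNbOutputs()

std::vector<Tensor> wireConvolutions(Network& net,
                                     const std::vector<Tensor>& pluginOutputs) {
    std::vector<Tensor> convs;
    for (const Tensor& out : pluginOutputs)  // one conv per plugin output
        convs.push_back(net.addConvolution(out));
    return convs;
}
```

This is exactly why it feels inefficient: the graph grows by one layer per output instead of handling the enlarged batch in a single tensor.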

However, this seems inefficient. Is there another way to do it in TensorRT 2.1?


Did you solve the problem?


I solved the problem by upgrading from TensorRT 2.1 to TensorRT 3.0! It seems that TensorRT 2.1 is full of feature bugs…

Which version of TensorRT are you using?


I am using TensorRT 4.0.

So do you still change the return value of getNbOutputs to 32?

No. Now my getNbOutputs returns 1, and my getOutputDimensions returns a DimsNCHW(batchSize, nbChannels, channelsHeight, channelsWidth)!
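A minimal sketch of that approach, with stand-in types rather than the real NvInfer.h definitions, and a hypothetical expansion factor of 32 (the real `IPlugin::getOutputDimensions` has a different signature; this only illustrates the idea of folding the enlarged batch into the first dimension of a single output):

```cpp
#include <cassert>

// Stand-in for nvinfer1::DimsNCHW, for illustration only.
struct DimsNCHW {
    int n, c, h, w;
    DimsNCHW(int n_, int c_, int h_, int w_) : n(n_), c(c_), h(h_), w(w_) {}
};

struct MyPlugin {
    static constexpr int kBatchFactor = 32;  // hypothetical batch expansion

    // One output instead of 32 separate ones.
    int getNbOutputs() const { return 1; }

    // The enlarged batch is carried in dim 0 of the single NCHW output,
    // so downstream layers see one tensor rather than 32.
    DimsNCHW getOutputDimensions(const DimsNCHW& in) const {
        return DimsNCHW(in.n * kBatchFactor, in.c, in.h, in.w);
    }
};
```

The design point is that downstream layers then only need to be added once, since the batch expansion lives inside the tensor shape instead of in the graph topology.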

Try this, I think it will solve your problem.