Inference using TensorRT

Hi,

I am using TensorRT (Plugin API) for inference of the SSD network, and I am able to speed the network up with the TensorRT framework. I get correct results up to the mbox_conf_reshape, mbox_loc, and mbox_priorbox layers, but the results are wrong after the softmax layer (just one layer, near the end of the network). The documentation says the softmax layer computes softmax across the channel dimension. My tensor shape before the softmax layer is C: 21, H: 8732, W: 1, but I don't think the softmax is being computed across the 21-length dimension. What could be the problem?
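
For reference, this is roughly what I understand "softmax across the channel dimension" to mean for a CHW buffer of shape (21, 8732, 1): the 21 class scores belonging to one prior box are read H*W = 8732 elements apart. A minimal CPU sketch (my own illustration, not TensorRT's implementation):

// Reference channel-wise softmax over a CHW buffer.
// For C = 21, H = 8732, W = 1, the scores for one prior box sit at
// index (c * H + h) * W + w, i.e. 8732 floats apart in memory.
#include <algorithm>
#include <cmath>

void softmaxOverChannels(const float* in, float* out, int C, int H, int W)
{
    for (int h = 0; h < H; ++h)
    {
        for (int w = 0; w < W; ++w)
        {
            float maxVal = in[h * W + w];  // c = 0
            for (int c = 1; c < C; ++c)
                maxVal = std::max(maxVal, in[(c * H + h) * W + w]);

            float sum = 0.f;
            for (int c = 0; c < C; ++c)
                sum += std::exp(in[(c * H + h) * W + w] - maxVal);  // subtract max for stability

            for (int c = 0; c < C; ++c)
                out[(c * H + h) * W + w] = std::exp(in[(c * H + h) * W + w] - maxVal) / sum;
        }
    }
}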

Thank you

Hi,

A follow-up question to the above. If the shuffling of the data is the problem, how can I add a shuffle layer to the network? Can I do that by editing the prototxt file? Only a C++ API example is given for the Shuffle layer, and I am using NvCaffeParser to parse the model rather than building the network with the C++ API.
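
For what it's worth, this is the direction I was considering (untested, and I am not sure the Shuffle layer is available in my TensorRT version): after NvCaffeParser has parsed the model, look up the tensor by its blob name and append a transpose with the C++ API instead of editing the prototxt. The variable names and the blob name "mbox_conf_reshape" are just placeholders from my setup, and the permutation would have to match whatever layout the plugin actually produces:

// Assumes: network is the nvinfer1::INetworkDefinition* being built, and
// blobNameToTensor is the nvcaffeparser1::IBlobNameToTensor* returned by
// parser->parse(). Requires a TensorRT version that provides IShuffleLayer.
nvinfer1::ITensor* confTensor = blobNameToTensor->find("mbox_conf_reshape");

nvinfer1::IShuffleLayer* shuffle = network->addShuffle(*confTensor);
nvinfer1::Permutation perm{};
perm.order[0] = 1;  // example only: swap the first two axes
perm.order[1] = 0;
perm.order[2] = 2;
shuffle->setFirstTranspose(perm);
shuffle->getOutput(0)->setName("mbox_conf_shuffled");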

Thank you

Hi,

Which TensorRT version are you using?
Could you try our latest TensorRT 3 in JetPack 3.2 and share the results?

Thanks.

Hi,

I am using the latest JetPack 3.2. I get the output dimensions from mbox_conf_reshape (a custom plugin that only reshapes the dimensions, not the data) as (C: 21, H: 8732, W: 1), but the output is just a flat array. If I compute softmax over the first 21 elements of the array myself, the results are correct; but if I pass the data through the softmax layer, the results are totally different. Should I permute the data along with the dimensions in the mbox_conf_reshape plugin, or can I add createSSDPermute? Could you please share the code for the Reshape plugin and the softmax plugin you used for benchmarking?
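
If it helps to pin down the mismatch: when the 21 scores of one prior box are next to each other in memory (index h * 21 + c), the buffer is effectively in (H, C) order, while a channel-wise softmax over (C: 21, H: 8732, W: 1) reads values 8732 elements apart (index c * 8732 + h). A permute in front of the softmax would have to rearrange the data roughly like this (plain CPU sketch for illustration, sizes taken from my case):

// Move class scores from prior-major layout (numPriors x numClasses,
// classes contiguous) into CHW layout (numClasses x numPriors x 1),
// so that a channel-wise softmax groups the right 21 values together.
void permutePriorMajorToCHW(const float* src, float* dst,
                            int numPriors /* 8732 */, int numClasses /* 21 */)
{
    for (int h = 0; h < numPriors; ++h)
        for (int c = 0; c < numClasses; ++c)
            dst[c * numPriors + h] = src[h * numClasses + c];
}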

Thank you

Hi,

It looks like the error is caused by an unexpected output format from the custom plugin.
TensorRT requires the NCHW data format. Please make sure your plugin's output layout is acceptable to TensorRT.

The SSD-related plugins have not been officially released to the public.
Sorry, we cannot share more information about them.

Thanks.

Hi AastaLLL,

Thanks for the help. The softmax layer computes only along the channel dimension. I considered permuting the data into NCHW format by inserting a new layer before the softmax, but in the end I implemented a custom softmax plugin that computes softmax along the second dimension. I am now getting exact results.
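
In case it helps anyone else, the plugin's computation is roughly equivalent to a softmax over each contiguous group of 21 scores, i.e. along the innermost dimension of an (8732 x 21) buffer. A CPU reference of that (the plugin itself does the same thing inside enqueue(), typically with a CUDA kernel):

// CPU reference: softmax over each contiguous group of numClasses scores.
#include <algorithm>
#include <cmath>

void softmaxOverContiguousClasses(const float* in, float* out,
                                  int numPriors, int numClasses)
{
    for (int h = 0; h < numPriors; ++h)
    {
        const float* row = in + h * numClasses;
        float* dst = out + h * numClasses;

        float maxVal = *std::max_element(row, row + numClasses);  // for numerical stability

        float sum = 0.f;
        for (int c = 0; c < numClasses; ++c)
            sum += std::exp(row[c] - maxVal);

        for (int c = 0; c < numClasses; ++c)
            dst[c] = std::exp(row[c] - maxVal) / sum;
    }
}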

Thanks