Error when porting Faster R-CNN trained with COCO into TensorRT

I got the following error when porting a Faster R-CNN model trained with COCO into TensorRT:

sample_fasterRCNN_debug: NvPluginFasterRCNN.cu:81: virtual void nvinfer1::plugin::RPROIPlugin::configure(const nvinfer1::Dims*, int, const nvinfer1::Dims*, int, int): Assertion `inputDims[0].d[0] == (2 * A) && inputDims[1].d[0] == (4 * A)' failed.

It seems the dimensions are mismatched.

The model trained with COCO is different from the one trained with Pascal VOC (the default model used in the sampleFasterRCNN project). The number of output class labels is 81 instead of 21, and the number of anchors used in the COCO model increases from 9 to 12.

Due to these differences, I made the following changes to the sampleFasterRCNN project (see the code sketch after the list):

Change OUTPUT_CLS_SIZE to 81 to match the number of class labels in the COCO dataset.
Change anchorsScaleCount to 4 and anchorsScales to {4.0f, 8.0f, 16.0f, 32.0f}.
Change the num_output of the rpn_cls_score and rpn_bbox_pred layers in the prototxt to 24 and 48.
Change the num_output of the cls_score and bbox_pred layers in the prototxt to 81 and 324.
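
In code, those changes amount to roughly the following edited values in sampleFasterRCNN.cpp. This is only a sketch: the sample's actual declarations may be arranged differently, and the anchorsRatios array is assumed to keep the usual py-faster-rcnn ratios {0.5, 1, 2}, since I only changed the scales.

// Values edited for the COCO-trained model (sketch).
static const int OUTPUT_CLS_SIZE = 81;   // 80 COCO classes + background (was 21)

static const int anchorsRatioCount = 3;  // unchanged
static const int anchorsScaleCount = 4;  // was 3 for Pascal VOC
static const float anchorsRatios[anchorsRatioCount] = {0.5f, 1.0f, 2.0f};
static const float anchorsScales[anchorsScaleCount] = {4.0f, 8.0f, 16.0f, 32.0f};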

The model I used is the pretrained Faster R-CNN from this link: https://github.com/rbgirshick/py-faster-rcnn/tree/master/models

I also found a similar question asked on the NVIDIA forum, but nobody replied: https://devtalk.nvidia.com/default/topic/1016199/gpu-accelerated-libraries/using-tensorrt-2-1-with-my-own-trained-caffemodel/

Please help!

Hi,

To use RPROIPlugin, it is required that:
inputDims[0].d[0] == (2 * A) && inputDims[1].d[0] == (4 * A)
where A = anchorsRatioCount * anchorsScaleCount.
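
Concretely, with the COCO setup described above (3 ratios, 4 scales), the expected channel counts work out as follows; this is a standalone arithmetic check, not code from the plugin:

#include <cassert>

int main()
{
    // COCO anchor configuration from the earlier post (3 ratios, 4 scales).
    const int anchorsRatioCount = 3;
    const int anchorsScaleCount = 4;
    const int A = anchorsRatioCount * anchorsScaleCount;  // 12 anchors per location

    // RPROIPlugin::configure() requires:
    //   inputDims[0].d[0] == 2 * A  -> rpn_cls_score num_output must be 24
    //   inputDims[1].d[0] == 4 * A  -> rpn_bbox_pred num_output must be 48
    assert(2 * A == 24);
    assert(4 * A == 48);
    return 0;
}

If both RPN outputs already carry these channel counts, the mismatch likely comes from whatever feeds the plugin's two inputs (for example, an upstream reshape layer still sized for the VOC model).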

There are lots of variants of Faster R-CNN, and it is not easy for us to support all of them.
It’s recommended to use our custom API to implement your own plugin layer.

Here is an example for your reference:
/usr/src/tensorrt/samples/samplePlugin
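
Roughly, a custom layer implements the IPlugin interface that samplePlugin demonstrates. The skeleton below is a minimal sketch against the TensorRT 2.x/3.x IPlugin API; the class name and the placeholder bodies are illustrative, not code from the sample:

#include <NvInfer.h>
#include <cuda_runtime_api.h>

// Minimal sketch of a custom layer; the method bodies are placeholders.
class MyCustomPlugin : public nvinfer1::IPlugin
{
public:
    int getNbOutputs() const override { return 1; }

    nvinfer1::Dims getOutputDimensions(int index, const nvinfer1::Dims* inputs, int nbInputDims) override
    {
        // Report the output shape given the input shapes; same shape as the input here.
        return inputs[0];
    }

    void configure(const nvinfer1::Dims* inputDims, int nbInputs,
                   const nvinfer1::Dims* outputDims, int nbOutputs, int maxBatchSize) override
    {
        // Called once with the resolved dimensions; this is the hook where
        // RPROIPlugin runs the assertion quoted above.
    }

    int initialize() override { return 0; }   // allocate per-layer resources here
    void terminate() override {}              // and release them here

    size_t getWorkspaceSize(int maxBatchSize) const override { return 0; }

    int enqueue(int batchSize, const void* const* inputs, void** outputs,
                void* workspace, cudaStream_t stream) override
    {
        // Launch the layer's CUDA work on `stream`; this placeholder does nothing.
        // Return 0 on success.
        return 0;
    }

    // Serialization hooks used when the engine is serialized/deserialized.
    size_t getSerializationSize() override { return 0; }
    void serialize(void* buffer) override {}
};

The plugin object is handed to the builder and the Caffe parser through a plugin factory keyed by layer name, in the same way samplePlugin and sampleFasterRCNN do it.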

Thanks.

Thank you for the quick reply.

So what is inputDims?

And could you share the source code of NvPluginFasterRCNN.cu with us?

Oh man, I just fixed it.

I forgot to change ReshapeCTo18 to ReshapeCTo24! Everything is good now. Thanks!
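
For anyone hitting the same assertion: the sample implements the ReshapeCTo18 layer as a reshape plugin whose output channel count is fixed in the code, so with 12 COCO anchors it has to produce 2 * 12 = 24 channels instead of 18. Below is a sketch of such a plugin; the templated design follows my reading of the sample, but treat the exact code (and the ReshapeSketch name) as an assumption rather than the shipped source:

#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <cassert>
#include <cstddef>

// Channel-reshape plugin sketch: same data, re-exposed as OutC channels.
template <int OutC>
class ReshapeSketch : public nvinfer1::IPlugin
{
public:
    int getNbOutputs() const override { return 1; }

    nvinfer1::Dims getOutputDimensions(int index, const nvinfer1::Dims* inputs, int nbInputDims) override
    {
        assert(index == 0 && nbInputDims == 1 && inputs[0].nbDims == 3);
        // Keep the element count, change the channel dimension to OutC.
        assert((inputs[0].d[0] * inputs[0].d[1]) % OutC == 0);
        return nvinfer1::DimsCHW(OutC, inputs[0].d[0] * inputs[0].d[1] / OutC, inputs[0].d[2]);
    }

    void configure(const nvinfer1::Dims* inputDims, int, const nvinfer1::Dims*, int, int) override
    {
        // Bytes per batch item (float tensors assumed).
        mCopySize = inputDims[0].d[0] * inputDims[0].d[1] * inputDims[0].d[2] * sizeof(float);
    }

    int initialize() override { return 0; }
    void terminate() override {}
    size_t getWorkspaceSize(int) const override { return 0; }

    int enqueue(int batchSize, const void* const* inputs, void** outputs,
                void*, cudaStream_t stream) override
    {
        // A reshape is just a copy; the new shape is metadata from getOutputDimensions().
        cudaMemcpyAsync(outputs[0], inputs[0], mCopySize * batchSize,
                        cudaMemcpyDeviceToDevice, stream);
        return 0;
    }

    size_t getSerializationSize() override { return 0; }
    void serialize(void*) override {}

private:
    std::size_t mCopySize{0};
};

// With 12 COCO anchors, the RPN score reshape needs 24 channels, not 18:
using ReshapeCTo24 = ReshapeSketch<24>;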

Good to know : )
Thanks for sharing the status with us.