Faster RCNN using GoogLeNet as feature extractor in TensorRT 4

Hi,

When I converted a Faster RCNN model with a GoogLeNet feature extractor (trained with Caffe) to TensorRT 4, I got the following error:

ERROR: inception_5a/output: all concat input tensors must have the same dimensions except on the concatenation axis

RPROIFused outputs a blob, the blob is passed to an inception module (inception_5a/), and inception_5a/output then concatenates four tensors from that module.

  1. How can I solve the error above?
  2. How can I get the dimensions of the output blob from RPROIFused?
  3. Is it possible to print the input/output dimensions of the blobs in each layer when building a TensorRT network?

The following link contains the model and prototxt I used:
https://goo.gl/C5NUZs

Thank you for providing the model repro. Reviewing now.

(Update)
Hi, the output dimension of RPROIFused looks normal. The error seems to occur when inception_5a/output tries to concatenate the four tensors. Each blob is a 4D tensor (NCHW), and another error message shows: nvinfer1::DimsCHW nvinfer1::getCHW(const nvinfer1::Dims&): Assertion `d.nbDims >= 3' failed.

Does the message above mean the concatenation layer in TensorRT doesn't support concatenating 4D tensors?

Hello,

Per engineering:

With TRT 5, engineering was able to get past the parsing error. Engineering modified the prototxt to something similar to what we do in the shipped Faster RCNN sample (please refer to that sample).

This is the change made to the prototxt. You may have to update the param values based on your network:

layer {
  name: "RPROIFused"
  type: "RPROI"
  bottom: 'rpn_cls_prob_reshape'
  bottom: 'rpn_bbox_pred'
  bottom: 'inception_4e/output'
  bottom: 'im_info'
  top: 'rois'
  top: 'roi_pool4'
  region_proposal_param {
    feature_stride: 16
    prenms_top: 6000
    nms_max_out: 300
    anchor_ratio_count: 3
    anchor_scale_count: 3
    iou_threshold: 0.7
    min_box_size: 16
    anchor_ratio: 0.5
    anchor_ratio: 1.0
    anchor_ratio: 2.0
    anchor_scale: 8.0
    anchor_scale: 16.0
    anchor_scale: 32.0
  }
  roi_pooling_param {
    pooled_h: 7
    pooled_w: 7
    spatial_scale: 0.0625
  }
}

By doing this, in TRT 5, the caffe parser will add the plugin to the network using the plugin registry and also populate all the necessary plugin params from the prototxt. This is similar to what we do in the fasterRCNN sample.
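For reference, the build flow that picks up the plugin from the registry looks roughly like this (a sketch modeled on the TRT 5 Faster RCNN sample; gLogger and the file names are placeholders):

```cpp
#include "NvInfer.h"
#include "NvCaffeParser.h"
#include "NvInferPlugin.h"

// Sketch of the TRT 5 build flow; gLogger is an ILogger implementation
// as in the shipped samples, and the file names are placeholders.
void buildFromCaffe(nvinfer1::ILogger& gLogger)
{
    // Register the shipped plugins (including RPROI) with the plugin
    // registry so the caffe parser can resolve the "RPROI" layer type.
    initLibNvInferPlugins(&gLogger, "");

    nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(gLogger);
    nvinfer1::INetworkDefinition* network = builder->createNetwork();

    nvcaffeparser1::ICaffeParser* parser = nvcaffeparser1::createCaffeParser();
    const nvcaffeparser1::IBlobNameToTensor* blobNameToTensor =
        parser->parse("test.prototxt", "model.caffemodel", *network,
                      nvinfer1::DataType::kFLOAT);
    // ... then mark the output tensors, build the engine, and destroy
    // the parser/network/builder objects as in the sample.
}
```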

Regarding the question: to get the output dimensions of the plugin, you can call the getOutputDimensions function on the created plugin object. There is no way to print the output dimensions by default with the caffe parser.
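That said, once the network has been parsed you can walk its layers yourself and print each output tensor's dimensions (a sketch against the TensorRT C++ API; "network" is assumed to be the parsed INetworkDefinition):

```cpp
#include "NvInfer.h"
#include <cstdio>

// Print the output dimensions of every layer in a parsed network.
void printLayerDims(nvinfer1::INetworkDefinition* network)
{
    for (int i = 0; i < network->getNbLayers(); ++i)
    {
        nvinfer1::ILayer* layer = network->getLayer(i);
        for (int j = 0; j < layer->getNbOutputs(); ++j)
        {
            nvinfer1::Dims d = layer->getOutput(j)->getDimensions();
            std::printf("%s out[%d]:", layer->getName(), j);
            for (int k = 0; k < d.nbDims; ++k)
                std::printf(" %d", d.d[k]);
            std::printf("\n");
        }
    }
}
```

Calling this right after ICaffeParser::parse returns can help pin down which input to inception_5a/output has the mismatched dimensions.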

Hi, sorry for late reply.

I've changed my prototxt settings based on the content you provided above and executed the Faster RCNN sample in TensorRT 5.0.2.6.

However, the same error message remains:

ERROR: inception_5a/output: all concat input tensors must have the same dimensions except on the concatenation axis

nvinfer1::DimsCHW enginehelper::getCHW(const nvinfer1::Dims&): Assertion `d.nbDims >= 3' failed.

I'm wondering whether the Concat layer named inception_5a/output doesn't accept 4D input tensors. Must input tensors to the Concat layer in TensorRT be 2D or 3D?

Hi all,

I confirm that I also have the same problem as RahnRYHuang, I get exactly the same error message.

I am using TensorRT 5.0.2.6 with CUDA 10 and cudnn 7.4, on Ubuntu 18.04 with RTX 2080.

When I try to add region_proposal_param and roi_pooling_param to the prototxt, I get the following error:

libprotobuf ERROR google/protobuf/text_format.cc:298] Error parsing text-format ditcaffe.NetParameter: 1990:25: Message type "ditcaffe.LayerParameter" has no field named "region_proposal_param".
ERROR: CaffeParser: Could not parse deploy file

Hello, have you solved this issue?

Hello, have you solved this issue? I still get:

inference: helpers.cpp:56: nvinfer1::DimsCHW nvinfer1::getCHW(const nvinfer1::Dims&): Assertion `d.nbDims >= 3' failed.

Hello,

You can refer to this thread (which concerns TensorRT 5):
https://devtalk.nvidia.com/default/topic/1047023/tensorrt/error-with-concat-layer/post/5314365/#5314365

The conclusion there is that the problem will be solved in the next TensorRT release (5.1?).

Thank you for your reply. I will pay close attention to it.

Dear 1014360228:

We replaced the concatenation layer and the inception module after the RPROIFused layer with fc layers in our model to avoid the error, but

ERROR: inception_5a/output: all concat input tensors must have the same dimensions except on the concatenation axis

still remains if we concatenate 4D tensors together.

Hi, RahnRYHuang

The concat error may originate from incorrect handling in the RPROIFused plugin.

The dimensions of rpn_bbox_pred change with the customized GoogLeNet feature layers.
There are some parameters in the RPROIFused plugin that also need to be updated accordingly.

Could you share the original .prototxt used for training with us?
We will check how to customize the plugin based on your model.

Thanks.

Hi, AastaLLL,

The attachments are the training and testing prototxt files of our customized Faster RCNN model. We also found another error when placing a pooling layer after the RPROIFused layer:

ERROR: cudnnPoolingLayer.cpp (130) - Cudnn Error in execute: 3

The version of TensorRT we use is 4.0.1.6.

Thank you.

test.prototxt.txt (41.3 KB)
train.prototxt.txt (43.3 KB)

Hi,

Thanks for your data.
We will check it and update more information with you later.

Stay tuned.

Hi, RahnRYHuang

We are still checking this issue.
Will update more information with you soon.

Thanks.

I am also trying to deploy Faster RCNN with GoogLeNet as the feature extractor, and I am having a similar problem.

[libprotobuf ERROR google/protobuf/text_format.cc:298] Error parsing text-format ditcaffe.NetParameter: 2004:25: Message type "ditcaffe.LayerParameter" has no field named "region_proposal_param".
[TensorRT] ERROR: CaffeParser: Could not parse deploy file
[TensorRT] ERROR: Failed to parse caffe model
  File "/usr/lib/python2.7/dist-packages/tensorrt/legacy/utils/__init__.py", line 352, in caffe_to_trt_engine
    assert(blob_name_to_tensor)
Traceback (most recent call last):
  File "", line 7, in
  File "/usr/lib/python2.7/dist-packages/tensorrt/legacy/utils/__init__.py", line 360, in caffe_to_trt_engine
    raise AssertionError('Caffe parsing failed on line {} in statement {}'.format(line, text))
AssertionError: Caffe parsing failed on line 352 in statement assert(blob_name_to_tensor)

Can you please look into the error? Here are my prototxt file and caffemodel:
https://drive.google.com/drive/folders/1DwEMslkdhUP7QbrsruSF9PL6EhjOZIch?usp=sharing