Assertion `inputs[0].nbDims == 4 && inputs[0].d[1] == mNbClasses * 4' failed.

Description

I am trying to optimise my Mask-RCNN model for better and faster inference. To do this, I converted my .h5 model to UFF using the instructions given in the TensorRT repo. The environment I used was:

CUDA: 10.0
cudnn: 7.6.2
TensorRT: 7.0.0.11

I successfully converted both the sample model and my custom model into UFF.

Then, I built the samples in another environment, because the first one caused some build errors. This time I used:

CUDA: 10.2
cudnn: 8.1
TensorRT: 7.0.0.11

I was able to compile, build, and run the sample. For my custom model (now in UFF format), I made the relevant changes to mrcnn_config.h:

diff --git a/content/TensorRT/samples/opensource/sampleUffMaskRCNN/backup_mrcnn_config.h b/content/TensorRT/samples/opensource/sampleUffMaskRCNN/mrcnn_config.h
index cf24cbc..746c294 100644
--- a/content/TensorRT/samples/opensource/sampleUffMaskRCNN/backup_mrcnn_config.h
+++ b/content/TensorRT/samples/opensource/sampleUffMaskRCNN/mrcnn_config.h
@@ -13,7 +13,6 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-
 #ifndef MASKRCNN_CONFIG_HEADER
 #define MASKRCNN_CONFIG_HEADER
 #include "NvInfer.h"
@@ -57,7 +56,7 @@ static const int FPN_CLASSIF_FC_LAYERS_SIZE = 1024;
 static const int TOP_DOWN_PYRAMID_SIZE = 256;
 
 // Number of classification classes (including background)
-static const int NUM_CLASSES = 1 + 80; // COCO has 80 classes
+static const int NUM_CLASSES = 1 + 3; // 3 custom classes + background
 
 // Length of square anchor side in pixels
 static const std::vector<float> RPN_ANCHOR_SCALES = {32, 64, 128, 256, 512};
@@ -85,86 +84,9 @@ static const int POST_NMS_ROIS_INFERENCE = 1000;
 // COCO Class names
 static const std::vector<std::string> CLASS_NAMES = {
     "BG",
-    "person",
-    "bicycle",
-    "car",
-    "motorcycle",
-    "airplane",
-    "bus",
-    "train",
-    "truck",
-    "boat",
-    "traffic light",
-    "fire hydrant",
-    "stop sign",
-    "parking meter",
-    "bench",
-    "bird",
-    "cat",
-    "dog",
-    "horse",
-    "sheep",
-    "cow",
-    "elephant",
-    "bear",
-    "zebra",
-    "giraffe",
-    "backpack",
-    "umbrella",
-    "handbag",
-    "tie",
-    "suitcase",
-    "frisbee",
-    "skis",
-    "snowboard",
-    "sports ball",
-    "kite",
-    "baseball bat",
-    "baseball glove",
-    "skateboard",
-    "surfboard",
-    "tennis racket",
-    "bottle",
-    "wine glass",
-    "cup",
-    "fork",
-    "knife",
-    "spoon",
-    "bowl",
-    "banana",
-    "apple",
-    "sandwich",
-    "orange",
-    "broccoli",
-    "carrot",
-    "hot dog",
-    "pizza",
-    "donut",
-    "cake",
-    "chair",
-    "couch",
-    "potted plant",
-    "bed",
-    "dining table",
-    "toilet",
-    "tv",
-    "laptop",
-    "mouse",
-    "remote",
-    "keyboard",
-    "cell phone",
-    "microwave",
-    "oven",
-    "toaster",
-    "sink",
-    "refrigerator",
-    "book",
-    "clock",
-    "vase",
-    "scissors",
-    "teddy bear",
-    "hair drier",
-    "toothbrush",
+    "panel",
+    "back",
+    "dock"
 };
 
 static const std::string MODEL_NAME = "mrcnn_nchw.uff";
@@ -174,4 +96,4 @@ static const std::vector<std::string> MODEL_OUTPUTS = {"mrcnn_detection", "mrcnn
 static const Dims2 MODEL_DETECTION_SHAPE{DETECTION_MAX_INSTANCES, 6};
 static const Dims4 MODEL_MASK_SHAPE{DETECTION_MAX_INSTANCES, NUM_CLASSES, 28, 28};
 } // namespace MaskRCNNConfig
-#endif
\ No newline at end of file
+#endif
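Note that the edits above must stay self-consistent: NUM_CLASSES includes the background class, and the sample assumes CLASS_NAMES has exactly NUM_CLASSES entries when mapping class IDs to names. A quick sketch of that invariant (in Python, mirroring the edited header):

```python
# Mirrors the values from the edited mrcnn_config.h above; the assert is the
# invariant the sample relies on when mapping detected class IDs to names.
NUM_CLASSES = 1 + 3  # background + the three custom classes
CLASS_NAMES = ["BG", "panel", "back", "dock"]

assert len(CLASS_NAMES) == NUM_CLASSES  # one name per class, BG included
print("config consistent:", len(CLASS_NAMES), "names for", NUM_CLASSES, "classes")
```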

I recompiled everything and the build succeeded. But now, when I try to run the executable on my custom model, I get the following error:

&&&& RUNNING TensorRT.sample_maskrcnn # ./sample_uff_maskRCNN --datadir /content/TensorRT-7.0.0.11/data/faster-rcnn --fp16
[05/24/2021-10:58:34] [I] Building and running a GPU inference engine for Mask RCNN
[05/24/2021-10:58:36] [I] [TRT] UFFParser: Did not find plugin field entry image_size in the Registered Creator for layer ROI
sample_uff_maskRCNN: detectionLayerPlugin.cpp:221: void nvinfer1::plugin::DetectionLayer::check_valid_inputs(const nvinfer1::Dims*, int): Assertion `inputs[0].nbDims == 4 && inputs[0].d[1] == mNbClasses * 4' failed.
---------------------------------------------------------------------------
CalledProcessError                        Traceback (most recent call last)
<ipython-input-60-6d09927647ae> in <module>()
      1 
----> 2 get_ipython().run_cell_magic('shell', '', './sample_uff_maskRCNN --datadir $TRT_DATADIR --fp16')

2 frames
/usr/local/lib/python3.7/dist-packages/google/colab/_system_commands.py in check_returncode(self)
    137     if self.returncode:
    138       raise subprocess.CalledProcessError(
--> 139           returncode=self.returncode, cmd=self.args, output=self.output)
    140 
    141   def _repr_pretty_(self, p, cycle):  # pylint:disable=unused-argument

CalledProcessError: Command './sample_uff_maskRCNN --datadir $TRT_DATADIR --fp16' died with <Signals.SIGABRT: 6>.
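The failing assertion comes from the DetectionLayer plugin, which requires its delta input to be 4-D with a channel dimension equal to mNbClasses * 4. A plausible cause (an assumption on my part, not confirmed in this thread) is that the UFF file was exported with the converter's default 81-class COCO config while the sample was rebuilt with NUM_CLASSES = 1 + 3, so the two sides disagree on that dimension. A minimal Python sketch of the check:

```python
# Minimal sketch of DetectionLayer::check_valid_inputs from the log above.
# The 81-vs-4 mismatch below is an assumed explanation, not a confirmed one.
def check_valid_inputs(nb_dims: int, channel_dim: int, nb_classes: int) -> bool:
    # The plugin requires a 4-D delta input whose channel dim is classes * 4.
    return nb_dims == 4 and channel_dim == nb_classes * 4

# UFF exported with the default COCO config: 81 classes -> 81 * 4 = 324 channels.
print(check_valid_inputs(4, 324, 81))  # True: export and build agree
# Sample rebuilt with NUM_CLASSES = 1 + 3: the plugin now expects 4 * 4 = 16.
print(check_valid_inputs(4, 324, 4))   # False: the failing assertion
```

If this is indeed the cause, the class count in the UFF conversion config would also have to be changed to match before re-exporting the model.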

Environment

Google Colaboratory

TensorRT Version: 7.0.0.11
GPU Type: Tesla P100-PCIE-16GB
Nvidia Driver Version:
CUDA Version: specified above
CUDNN Version: specified above
Operating System + Version: Ubuntu 18.04.5 LTS
Python Version (if applicable): 3.7
TensorFlow Version (if applicable): 1.5-gpu

Relevant Files

Colab/ipynb file: (removed)

Custom UFF file: link

mrcnn_config.h: mrcnn_config.h (3.6 KB)

Steps To Reproduce

The steps are described in the IPython notebook attached above.

Any help is appreciated.

Hi @pradan,

We request you to please try the latest TensorRT version, 8.0.

The UFF and Caffe parsers have been deprecated from TensorRT 7 onwards, so we recommend that you try the ONNX parser.
Please check the link below for details.

If you still face this issue, please share an ONNX model and scripts that reproduce it with us.

Thank you.
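As a sketch of that route (the file names here are placeholders, not files from this thread): once an ONNX export exists, trtexec can be used to smoke-test engine building before any application code is written.

```shell
# Hedged sketch: build and save an FP16 TensorRT engine from an ONNX model.
# mask_rcnn.onnx / mask_rcnn.engine are placeholder paths.
trtexec --onnx=mask_rcnn.onnx --fp16 --saveEngine=mask_rcnn.engine
```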

Thanks for this. I will try it. But is this also the recommended route if I eventually plan to deploy my model using DeepStream (on an NVIDIA Jetson)?

Hi @pradan,

If you need further assistance, we recommend posting your question on the DeepStream forum to get better help.

Thank you.

I can’t find any method to convert my Mask-RCNN (.h5) model into ONNX format. No working method seems to exist.

@pradan,

Hope this may help you.

Thank you.
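For anyone landing here, one commonly used route (an assumption, not something confirmed in this thread) is tf2onnx, which can convert a Keras .h5 model from the command line. Note that if the .h5 file contains only weights, the model has to be rebuilt in code and the weights loaded before export.

```shell
# Hedged sketch using tf2onnx (pip install tf2onnx); paths are placeholders.
# --opset 11 is a common choice for TensorRT compatibility, not a requirement.
python -m tf2onnx.convert --keras mask_rcnn.h5 --output mask_rcnn.onnx --opset 11
```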

This comment says:

(removed)

and I have trained my model using TF1.4-gpu.

Hi @pradan,

But the UFF and Caffe parsers have been deprecated from TensorRT 7 onwards, so we recommend you use the ONNX parser to get better support.

Thank you.