Assertion `inputs[0].nbDims == 4 && inputs[0].d[1] == mNbClasses * 4' failed

Throughout this experiment, I used a GPU-enabled Google Colab notebook with an NVIDIA Tesla GPU.

I am trying to optimise my custom Mask R-CNN model (.h5) using TensorRT 8.0.0.3.

I successfully converted my custom model (with 4 classes) to UFF.

To run inference, I compile the provided sample_uff_maskRCNN, simply altering its config file by replacing:

 // Number of classification classes (including background)
-static const int NUM_CLASSES = 1 + 80; // COCO has 80 classes
+static const int NUM_CLASSES = 1 + 3; // background + 3 custom classes
 
 
 // COCO Class names
 static const std::vector<std::string> CLASS_NAMES = {
-    "BG",
-    "person",
-    "bicycle",
-    "car",
-    "motorcycle",
-    "airplane",
-    "bus",
-    "train",
-    "truck",
-    "boat",
-    "traffic light",
-    "fire hydrant",
-    "stop sign",
-    "parking meter",
-    "bench",
-    "bird",
-    "cat",
-    "dog",
-    "horse",
-    "sheep",
-    "cow",
-    "elephant",
-    "bear",
-    "zebra",
-    "giraffe",
-    "backpack",
-    "umbrella",
-    "handbag",
-    "tie",
-    "suitcase",
-    "frisbee",
-    "skis",
-    "snowboard",
-    "sports ball",
-    "kite",
-    "baseball bat",
-    "baseball glove",
-    "skateboard",
-    "surfboard",
-    "tennis racket",
-    "bottle",
-    "wine glass",
-    "cup",
-    "fork",
-    "knife",
-    "spoon",
-    "bowl",
-    "banana",
-    "apple",
-    "sandwich",
-    "orange",
-    "broccoli",
-    "carrot",
-    "hot dog",
-    "pizza",
-    "donut",
-    "cake",
-    "chair",
-    "couch",
-    "potted plant",
-    "bed",
-    "dining table",
-    "toilet",
-    "tv",
-    "laptop",
-    "mouse",
-    "remote",
-    "keyboard",
-    "cell phone",
-    "microwave",
-    "oven",
-    "toaster",
-    "sink",
-    "refrigerator",
-    "book",
-    "clock",
-    "vase",
-    "scissors",
-    "teddy bear",
-    "hair drier",
-    "toothbrush",
+    "BG",
+    "rocket",
+    "shuttle",
+    "aliens"
 };

I also replace the sample's UFF file with my own, keeping the same filename.

On running it, I get:

&&&& RUNNING TensorRT.sample_maskrcnn [TensorRT v8001] # ./sample_uff_maskRCNN --datadir /content/TensorRT-8.0.0.3/data/faster-rcnn --fp16
[07/30/2021-06:43:29] [I] Building and running a GPU inference engine for Mask RCNN
[07/30/2021-06:43:30] [I] [TRT] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 0, GPU 254 (MiB)
sample_uff_maskRCNN: detectionLayerPlugin.cpp:254: void nvinfer1::plugin::DetectionLayer::check_valid_inputs(const Dims*, int): Assertion `inputs[0].nbDims == 4 && inputs[0].d[1] == mNbClasses * 4' failed.

As per the error, the assertion at the line linked below fails, and I can't figure out how to debug it.

https://github.com/NVIDIA/TensorRT/blob/c2668947ea9ba4c73eb1182c162101f09ff250fd/plugin/detectionLayerPlugin/detectionLayerPlugin.cpp#L254
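For context, the failing check simply requires the detection layer's first input to be 4-dimensional with its second dimension equal to NUM_CLASSES * 4 (four box-regression values per class). A rough Python rendering of that arithmetic, with purely illustrative numbers, would be:

```python
# Rough illustration of the plugin's check; not the actual TensorRT code.
# The detection layer expects its first input to be 4-D with d[1] == NUM_CLASSES * 4,
# i.e. four box-regression values for every class (background included).
NUM_CLASSES = 1 + 3                 # what the rebuilt sample is configured for

# Hypothetical dims of the tensor the plugin actually receives from the UFF graph;
# if the graph was generated for a different class count, d[1] will not match.
input_dims = (1000, 324, 1, 1)      # 324 == (1 + 80) * 4, purely an example

assert len(input_dims) == 4 and input_dims[1] == NUM_CLASSES * 4, (
    f"detection layer expected d[1] == {NUM_CLASSES * 4}, got {input_dims[1]}"
)
```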

Hi @pradan,
As mentioned in the Release Notes :: NVIDIA Deep Learning TensorRT Documentation:

We have deprecated the Caffe Parser and UFF Parser in TensorRT 7.0. They are still tested and functional in TensorRT 8.0, however, we plan to remove the support in the future. Ensure you migrate your workflow to use tf2onnx, keras2onnx or [TensorFlow-TensorRT (TF-TRT)](https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html) for deployment.

We don’t have plans to fix UFF-related issues. So, could you export it to ONNX and run it with TRT?
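For reference, a minimal tf2onnx export might look like the sketch below. It assumes the model can be loaded as a Keras model under TF2 (not a given for the TF1-based Matterport code), and the path and opset are placeholders, so treat it as an illustration of the suggested workflow rather than a drop-in command.

```python
import tensorflow as tf
import tf2onnx

# Sketch only: assumes the Mask R-CNN graph can be loaded as a full Keras model in TF2.
model = tf.keras.models.load_model("/path/to/mask_rcnn_model.h5")  # hypothetical path

# Convert the loaded Keras model to ONNX and write it to disk.
tf2onnx.convert.from_keras(model, opset=13, output_path="mask_rcnn.onnx")
```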

Thanks!

Porting to ONNX is only demonstrated using keras.load_model, and Keras to date only supports ResNet50, not ResNet101. Hence the problem.

Do you mean Keras does not support ResNet101?

The only documentation I came across is Generating Inference Engine using Onnx parser on ResNet50-based Mask-RCNN model, which demonstrates loading the model using:

model = keras.applications.resnet.ResNet50(include_top=True, weights='imagenet', input_tensor=None, input_shape=None, pooling=None, classes=1000)

In my case, I replace weights='imagenet' with the path to my .h5 weights. Keras has no keras.applications.resnet.ResNet101 attribute, so I assume this approach only works for ResNet50.
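A minimal sketch of that substitution (the path is a placeholder, and it only works if the .h5 file holds weights for an architecture matching this ResNet50 layer-for-layer):

```python
from tensorflow import keras

# Sketch of the substitution described above: pass a local .h5 path instead of the
# 'imagenet' identifier. The weight file must match this ResNet50 architecture exactly.
model = keras.applications.resnet.ResNet50(
    include_top=True,
    weights="/path/to/my_resnet50_weights.h5",  # hypothetical path
    input_tensor=None,
    input_shape=None,
    pooling=None,
    classes=1000,
)
```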

I retrained my model using ResNet50 and tried again, but with no success; it always throws:

ValueError: You are trying to load a weight file containing 131 layers into a model with 106 layers.

even though, while training, I used the ImageNet weights instead of COCO.
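For what it is worth, when layer counts differ like this, Keras can load weights by layer name instead of by position; whether that helps here depends on whether the checkpoint actually contains layers matching the ResNet50 model. A hedged sketch:

```python
from tensorflow import keras

# Sketch only: loading by layer name avoids the layer-count check, but weights whose
# names do not match are silently skipped, so this is not a fix by itself.
model = keras.applications.resnet.ResNet50(weights=None)
model.load_weights("/path/to/my_custom_weights.h5", by_name=True)  # hypothetical path
```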

Hi @pradan, is your new ResNet50 model trained entirely on keras.applications.resnet.ResNet50?

Keras has no ResNet101? So which framework did you use to train your ResNet101-based Mask R-CNN? TF1?

I followed the official Matterport instructions, and they do allow us to choose ResNet50 or ResNet101. Yes, I used TF1.
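For reference, the backbone choice in the Matterport code base is made through its Config class; a minimal sketch with hypothetical names for this dataset:

```python
from mrcnn.config import Config

# Matterport-style config sketch; the class name and NAME value are hypothetical.
# BACKBONE accepts "resnet50" or "resnet101" in the Matterport implementation.
class RocketConfig(Config):
    NAME = "rocket"
    NUM_CLASSES = 1 + 3          # background + rocket, shuttle, aliens
    BACKBONE = "resnet50"        # or "resnet101" for the larger backbone
```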

Why don’t you try to export your TF1.x model to ONNX?

I am trying that, and another issue has come up with it. Please find it here: [TensorRT] ERROR: Network must have at least one output