Converting Mask R-CNN to TensorRT

Hi

We are trying to convert a Mask R-CNN model to TensorRT 4 or 3 in order to run on a V100 for better performance.

Our current implementation uses Keras and TensorFlow.

The project is on GitHub.

We could also try Facebook's Caffe2 implementation of Mask R-CNN (Detectron), also on GitHub:

https://github.com/facebookresearch/Detectron

Or any other framework: MXNet, TensorFlow, PyTorch…

We are wondering which one would be easiest to convert, taking into consideration that custom layers exist.

If you are familiar with an existing Mask R-CNN implementation that has already been converted successfully to TensorRT, we would be grateful for any help.

Many thanks

Hi,

Do you want to use the DeepStream SDK?
Currently, the DeepStream SDK only supports TensorRT 3; it is not ready for TensorRT 4 yet.

For the R-CNN use case, you can check our Faster R-CNN sample in /usr/src/tensorrt/samples/sampleFasterRCNN for reference.
Thanks.

Hi
thanks for the reply
I just want to run Mask R-CNN on the V100 Tensor Cores for performance.
The only way to do that, if I understand correctly, is to convert the model to TensorRT.
TensorRT 3 does not support custom layers in Keras, nor does it support Caffe2; that's why I thought of using TensorRT 4.
Faster R-CNN does not meet our needs, as we need the masks.
Any tips on how to approach the issue?
Many thanks,
Eran

Hi,

Our latest TensorRT 4 should be good for your use case.
The UFF parser, which converts a TensorFlow model into TensorRT, supports custom layers as of TensorRT 4.

Currently, we don't have a dedicated example for the Mask R-CNN case.
A recommended workflow is TensorFlow → UFF → TensorRT + plugins, and you can find samples for each step in /usr/src/tensorrt.
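
As a minimal sketch of the TensorFlow → UFF step (the frozen graph path and output node name below are placeholders, not taken from your model):

import uff

# Convert a frozen TensorFlow graph to UFF. Ops that TensorRT does not
# support natively must be mapped to plugin nodes with a graphsurgeon
# preprocessor before this step (see the samples for details).
uff_model = uff.from_tensorflow_frozen_model(
    "frozen_model.pb",            # placeholder: your frozen .pb
    ["output/BiasAdd"],           # placeholder: your output node name(s)
    output_filename="model.uff",  # serialized UFF written here
)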

Thanks.

Hi,

Is there a schedule for when DeepStream will be ready for TensorRT 4?

Thanks.

Hi,

Sorry, we cannot disclose our schedule here.
Please watch for our announcements about the latest release.

Thanks.

Hi,

Is there a Mask R-CNN sample available for TensorRT 4? I need to know how to create my config.py file to be used as a preprocessor. I am using the Matterport Mask R-CNN model as well.

Thank You!

Hi,

Sorry, we don't have experience with Mask R-CNN + TensorRT.
Could you check whether TF-TRT helps with your use case?
https://github.com/NVIDIA-Jetson/tf_trt_models
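
As a rough sketch of what TF-TRT looks like with the TensorFlow 1.x contrib API (the path and node names below are placeholders):

import tensorflow as tf
import tensorflow.contrib.tensorrt as trt

# Load a frozen GraphDef (placeholder path).
with tf.gfile.GFile("frozen_model.pb", "rb") as f:
    frozen_graph = tf.GraphDef()
    frozen_graph.ParseFromString(f.read())

# Replace TensorRT-compatible subgraphs with TRT engine ops; unsupported
# (e.g. custom) ops automatically fall back to TensorFlow.
trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,
    outputs=["output/BiasAdd"],        # placeholder: your output node name(s)
    max_batch_size=1,
    max_workspace_size_bytes=1 << 30,
    precision_mode="FP16",             # FP16 can engage V100 Tensor Cores
)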

Thanks.

You can take out the custom nodes pretty easily.

Have you done it? I went through the entire process, but I do not have time right now to finish it, so I am hoping to do this in November. If you have, can you share the steps you took?

Thanks a lot!

This should help: https://github.com/matterport/Mask_RCNN/pull/167/commits/296d5b55206586fb77ca074d7da66594f1d6eae5

Okay, thank you very much.

Here is my problem.

I found out that since the Matterport Mask R-CNN model does not have the same structure as the Mask R-CNN models available in the TensorFlow model zoo, I have to replace a lot of custom nodes in my config.py file, right? Because the TensorRT documentation is geared towards supporting custom layers from the TensorFlow model zoo.

So currently I have to use the UFF parser to make a .uff file and then use the convert_plan script to create a .plan file.

I need a clear idea of how to get the Matterport Mask R-CNN model converted into its intermediate .uff format; that is the part I couldn't figure out. My current understanding is sketched below.
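
From the samples, I believe the .pb → .uff step looks roughly like the following (the frozen graph path is a placeholder, and the output node names are my guess for the Matterport graph):

import tensorflow as tf
import uff

# Load the frozen GraphDef exported from the Keras model (placeholder path).
with tf.gfile.GFile("mrcnn_frozen.pb", "rb") as f:
    frozen_graphdef = tf.GraphDef()
    frozen_graphdef.ParseFromString(f.read())

# Convert to UFF, applying the graphsurgeon preprocessing hook (config.py)
# that maps the custom nodes to TensorRT plugin ops.
uff.from_tensorflow(
    graphdef=frozen_graphdef,
    output_nodes=["mrcnn_detection", "mrcnn_mask/Sigmoid"],  # my guess
    preprocessor="config.py",
    output_filename="mrcnn.uff",
    text=True,  # also emit a human-readable .pbtxt
)

Is this right?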

I have run the model in C++, which doesn't support custom nodes.

I will do some work on TensorRT, and then I can share the results.

Thank you very much!

Do you have a Matterport Mask R-CNN C++ reference? If it is okay with you, please share it with me.

Hi,
I am trying to use Mask R-CNN with a ResNet-50 backbone. The trained model has 5 classes. I am able to generate a UFF similar to the COCO example provided at https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/sampleUffMaskRCNN.

While executing sample_uff_maskRCNN, it fails with a weights mismatch error. Error log:

[10/06/2019-15:42:37] [I] Building and running a GPU inference engine for Mask RCNN
[10/06/2019-15:42:38] [E] [TRT] mrcnn_mask_conv2/convolution: kernel weights has count 589824 but 2359296 was expected
[10/06/2019-15:42:38] [E] [TRT] mrcnn_mask_conv2/convolution: count of 589824 weights in kernel, but kernel dimensions (3,3) with 1024 input channels, 256 output channels and 1 groups were specified. Expected Weights count is 1024 * 3*3 * 256 / 1 = 2359296
[10/06/2019-15:42:38] [E] [TRT] UffParser: Parser error: mrcnn_mask_conv2/BiasAdd: The input to the Scale Layer is required to have a minimum of 3 dimensions.
&&&& FAILED TensorRT.sample_maskrcnn # ./sample_uff_maskRCNN

The UFF and pbtxt were generated successfully. Please help: if the UFF and pbtxt were generated from the same file, how can there be a weights mismatch? (Note that 589824 = 256 * 3 * 3 * 256, so the stored weights look like a 256-input-channel kernel, while the parser expects 1024 input channels.)
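
To double-check which side is off, I can dump the kernel shapes stored in the .h5 checkpoint; a minimal sketch (the checkpoint path is a placeholder, and the exact group layout inside the file may vary with the Keras version):

import h5py
import numpy as np

# Print the stored kernel shape(s) for mrcnn_mask_conv2 to see how many
# input channels the trained weights actually have.
with h5py.File("my_mask_rcnn.h5", "r") as f:
    def visit(name, obj):
        if isinstance(obj, h5py.Dataset) and "mrcnn_mask_conv2" in name and "kernel" in name:
            print(name, obj.shape, "count =", int(np.prod(obj.shape)))
    f.visititems(visit)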

The config file I used for generating the UFF is attached below.

import graphsurgeon as gs
import tensorflow as tf

fpn_p5upsampled = gs.create_plugin_node("fpn_p5upsampled", op="ResizeNearest_TRT", dtype=tf.float32, scale=2.0)
fpn_p4upsampled = gs.create_plugin_node("fpn_p4upsampled", op="ResizeNearest_TRT", dtype=tf.float32, scale=2.0)
fpn_p3upsampled = gs.create_plugin_node("fpn_p3upsampled", op="ResizeNearest_TRT", dtype=tf.float32, scale=2.0)

roi = gs.create_plugin_node("ROI", op="ProposalLayer_TRT", prenms_topk=1024, keep_topk=1000, iou_threshold=0.7)
roi_align_classifier = gs.create_plugin_node("roi_align_classifier", op="PyramidROIAlign_TRT", pooled_size=7)
mrcnn_detection = gs.create_plugin_node("mrcnn_detection", op="DetectionLayer_TRT", num_classes=5, keep_topk=100, score_threshold=0.7, iou_threshold=0.3)
roi_align_mask = gs.create_plugin_node("roi_align_mask_trt", op="PyramidROIAlign_TRT", pooled_size=14)
mrcnn_detection_bboxes = gs.create_plugin_node("mrcnn_detection_bboxes", op="SpecialSlice_TRT")

namespace_plugin_map = {
    "fpn_p5upsampled": fpn_p5upsampled,
    "fpn_p4upsampled": fpn_p4upsampled,
    "fpn_p3upsampled": fpn_p3upsampled,
    "roi_align_classifier": roi_align_classifier,
    "mrcnn_detection": mrcnn_detection,
    "ROI": roi,
    "roi_align_mask": roi_align_mask,
    "lambda_1": mrcnn_detection_bboxes,
}

timedistributed_remove_list = [
        "mrcnn_class_conv1/Reshape/shape", "mrcnn_class_conv1/Reshape", "mrcnn_class_conv1/Reshape_1/shape", "mrcnn_class_conv1/Reshape_1",
        "mrcnn_class_bn1/Reshape/shape", "mrcnn_class_bn1/Reshape", "mrcnn_class_bn1/Reshape_5/shape", "mrcnn_class_bn1/Reshape_5",
        "mrcnn_class_conv2/Reshape/shape", "mrcnn_class_conv2/Reshape", "mrcnn_class_conv2/Reshape_1/shape", "mrcnn_class_conv2/Reshape_1",
        "mrcnn_class_bn2/Reshape/shape", "mrcnn_class_bn2/Reshape", "mrcnn_class_bn2/Reshape_5/shape", "mrcnn_class_bn2/Reshape_5",
        "mrcnn_class_logits/Reshape/shape", "mrcnn_class_logits/Reshape","mrcnn_class_logits/Reshape_1/shape", "mrcnn_class_logits/Reshape_1",
        "mrcnn_class/Reshape/shape", "mrcnn_class/Reshape","mrcnn_class/Reshape_1/shape", "mrcnn_class/Reshape_1",
        "mrcnn_bbox_fc/Reshape/shape", "mrcnn_bbox_fc/Reshape","mrcnn_bbox_fc/Reshape_1/shape", "mrcnn_bbox_fc/Reshape_1",

        "mrcnn_mask_conv1/Reshape/shape", "mrcnn_mask_conv1/Reshape", "mrcnn_mask_conv1/Reshape_1/shape", "mrcnn_mask_conv1/Reshape_1",
        "mrcnn_mask_bn1/Reshape/shape", "mrcnn_mask_bn1/Reshape", "mrcnn_mask_bn1/Reshape_5/shape", "mrcnn_mask_bn1/Reshape_5",
        "mrcnn_mask_conv2/Reshape/shape", "mrcnn_mask_conv2/Reshape", "mrcnn_mask_conv2/Reshape_1/shape", "mrcnn_mask_conv2/Reshape_1",
        "mrcnn_mask_bn2/Reshape/shape", "mrcnn_mask_bn2/Reshape", "mrcnn_mask_bn2/Reshape_5/shape", "mrcnn_mask_bn2/Reshape_5",
        "mrcnn_mask_conv3/Reshape/shape", "mrcnn_mask_conv3/Reshape", "mrcnn_mask_conv3/Reshape_1/shape", "mrcnn_mask_conv3/Reshape_1",
        "mrcnn_mask_bn3/Reshape/shape", "mrcnn_mask_bn3/Reshape", "mrcnn_mask_bn3/Reshape_5/shape", "mrcnn_mask_bn3/Reshape_5",
        "mrcnn_mask_conv4/Reshape/shape", "mrcnn_mask_conv4/Reshape", "mrcnn_mask_conv4/Reshape_1/shape", "mrcnn_mask_conv4/Reshape_1",
        "mrcnn_mask_bn4/Reshape/shape", "mrcnn_mask_bn4/Reshape", "mrcnn_mask_bn4/Reshape_5/shape", "mrcnn_mask_bn4/Reshape_5",
        "mrcnn_mask_deconv/Reshape/shape", "mrcnn_mask_deconv/Reshape", "mrcnn_mask_deconv/Reshape_1/shape", "mrcnn_mask_deconv/Reshape_1",
        "mrcnn_mask/Reshape/shape", "mrcnn_mask/Reshape", "mrcnn_mask/Reshape_1/shape", "mrcnn_mask/Reshape_1",
        ]

timedistributed_connect_pairs = [
        ("mrcnn_mask_deconv/Relu", "mrcnn_mask/convolution"), # mrcnn_mask_deconv -> mrcnn_mask
        ("activation_40/Relu", "mrcnn_mask_deconv/conv2d_transpose"), #active74 -> mrcnn_mask_deconv
        ("mrcnn_mask_bn4/batchnorm/add_1","activation_40/Relu"),  # mrcnn_mask_bn4 -> active74
        ("mrcnn_mask_conv4/BiasAdd", "mrcnn_mask_bn4/batchnorm/mul_1"), #mrcnn_mask_conv4 -> mrcnn_mask_bn4
        ("activation_39/Relu", "mrcnn_mask_conv4/convolution"), #active73 -> mrcnn_mask_conv4
        ("mrcnn_mask_bn3/batchnorm/add_1","activation_39/Relu"), #mrcnn_mask_bn3 -> active73
        ("mrcnn_mask_conv3/BiasAdd", "mrcnn_mask_bn3/batchnorm/mul_1"), #mrcnn_mask_conv3 -> mrcnn_mask_bn3
        ("activation_38/Relu", "mrcnn_mask_conv3/convolution"), #active72 -> mrcnn_mask_conv3
        ("mrcnn_mask_bn2/batchnorm/add_1","activation_38/Relu"), #mrcnn_mask_bn2 -> active72
        ("mrcnn_mask_conv2/BiasAdd", "mrcnn_mask_bn2/batchnorm/mul_1"), #mrcnn_mask_conv2 -> mrcnn_mask_bn2
        ("activation_37/Relu", "mrcnn_mask_conv2/convolution"), #active71 -> mrcnn_mask_conv2
        ("mrcnn_mask_bn1/batchnorm/add_1","activation_37/Relu"), #mrcnn_mask_bn1 -> active71
        ("mrcnn_mask_conv1/BiasAdd", "mrcnn_mask_bn1/batchnorm/mul_1"), #mrcnn_mask_conv1 -> mrcnn_mask_bn1
        ("roi_align_mask_trt", "mrcnn_mask_conv1/convolution"), #roi_align_mask -> mrcnn_mask_conv1
        ("mrcnn_class_bn2/batchnorm/add_1","activation_35/Relu"), # mrcnn_class_bn2 -> active 69
        ("mrcnn_class_conv2/BiasAdd", "mrcnn_class_bn2/batchnorm/mul_1"), # mrcnn_class_conv2 -> mrcnn_class_bn2
        ("activation_37/Relu", "mrcnn_class_conv2/convolution"), # active 68 -> mrcnn_class_conv2
        ("mrcnn_class_bn1/batchnorm/add_1","activation_37/Relu"), # mrcnn_class_bn1 -> active 68
        ("mrcnn_class_conv1/BiasAdd", "mrcnn_class_bn1/batchnorm/mul_1"), # mrcnn_class_conv1 -> mrcnn_class_bn1
        ("roi_align_classifier", "mrcnn_class_conv1/convolution"), # roi_align_classifier -> mrcnn_class_conv1
        ]

dense_compatible_patch = ["pool_squeeze/Squeeze", "pool_squeeze/Squeeze_1", # No need to squeeze the dimensions for TRT Dense Layer
        "mrcnn_bbox/Shape", "mrcnn_bbox/strided_slice/stack", # mrcnn_bbox(Reshape): No need to reshape, cause we can process it as 1-D array in detectionlayer's kernel
        "mrcnn_bbox/strided_slice/stack_1", "mrcnn_bbox/strided_slice/stack_2",
        "mrcnn_bbox/strided_slice", "mrcnn_bbox/Reshape/shape/1",
        "mrcnn_bbox/Reshape/shape/2", "mrcnn_bbox/Reshape/shape/3",
        "mrcnn_bbox/Reshape/shape", "mrcnn_bbox/Reshape"]

dense_compatible_connect_pairs = [
        ("activation_35/Relu","mrcnn_bbox_fc/MatMul"), #activation_35 -> mrcnn_bbox_fc
        ("activation_35/Relu", "mrcnn_class_logits/MatMul"), #activation_35 -> mrcnn_class_logits
        ("mrcnn_class_logits/BiasAdd", "mrcnn_class/Softmax"), #mrcnn_class_logits -> mrcnn_class
        ("mrcnn_class/Softmax", "mrcnn_detection"), #mrcnn_class -> mrcnn_detection
        ("mrcnn_bbox_fc/BiasAdd", "mrcnn_detection"), #mrcnn_bbox_fc -> mrcnn_detection
        ]

def connect(dynamic_graph, connections_list):
    # Add node_a as an input of node_b unless the edge already exists.
    for node_a_name, node_b_name in connections_list:
        if node_a_name not in dynamic_graph.node_map[node_b_name].input:
            dynamic_graph.node_map[node_b_name].input.insert(0, node_a_name)

def preprocess(dynamic_graph):
    # Now create a new graph by collapsing namespaces
    dynamic_graph.collapse_namespaces(namespace_plugin_map, unique_inputs=True)
    dynamic_graph.remove(timedistributed_remove_list)
    dynamic_graph.remove(dense_compatible_patch)
    dynamic_graph.remove(['input_anchors', 'input_image_meta'])
    connect(dynamic_graph, timedistributed_connect_pairs)
    connect(dynamic_graph, dense_compatible_connect_pairs)

Hi,

Have you followed the instructions in the prerequisites first?
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/sampleUffMaskRCNN#prerequisites

In particular, did you update the conv2d_transpose conversion function in converter_functions.py?
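
As far as I recall from the sample's README (please verify against the link above), the change is to reorder the inputs passed to uff_graph.conv_transpose inside that conversion function, roughly:

# Fragment of the conv2d_transpose conversion function in
# uff/converters/tensorflow/converter_functions.py, quoted from memory;
# verify against the sample's prerequisites before applying.
uff_graph.conv_transpose(
    inputs[0], inputs[2], inputs[1],  # note the reordered inputs
    strides, padding,
    dilation=None, number_groups=number_groups,
    left_format=lhs_fmt, right_format=rhs_fmt,
    name=name, fields=fields)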

Thanks.

Can you provide the pbtxt after doing this:

https://github.com/NVIDIA/TensorRT/blob/master/samples/opensource/sampleUffMaskRCNN/converted/mrcnn_to_trt_single.py

convert_model(model_A, output_file_path, output_nodes, preprocessor=args.preprocessor,
              text=True, list_nodes=list_nodes)

Do you mind providing your H5 model? We can reproduce the issue on our side. You can upload it to Google Drive and send the link in a private message.

Can you change activation_37 to activation_34 on lines 71 and 72 of your config?

70:        ("mrcnn_class_conv2/BiasAdd", "mrcnn_class_bn2/batchnorm/mul_1"), # mrcnn_class_conv2 -> mrcnn_class_bn2
71:        ("activation_37/Relu", "mrcnn_class_conv2/convolution"), # active 68 -> mrcnn_class_conv2
72:        ("mrcnn_class_bn1/batchnorm/add_1", "activation_37/Relu"), # mrcnn_class_bn1 -> active 68
73:        ("mrcnn_class_conv1/BiasAdd", "mrcnn_class_bn1/batchnorm/mul_1"), # mrcnn_class_conv1 -> mrcnn_class_bn1
74:        ("roi_align_classifier", "mrcnn_class_conv1/convolution"), # roi_align_classifier -> mrcnn_class_conv1

Here is a link to mask_rcnn_nucleus_0080.h5 (Google Drive), which will reproduce the same error (I am using Mask R-CNN with a ResNet-50 backbone and 2 classes). I have tried changing activation_37 to activation_34; here is another error:

[TensorRT] ERROR: UffParser: Parser error: mrcnn_mask_deconv/conv2d_transpose: Invalid shape