Sample for Mask R-CNN crashes

Hello all,

I converted the Mask R-CNN model to UFF successfully with the TensorRT 7.0.0.11 tools. When I compile and run the TensorRT Mask R-CNN sample under Linux, it crashes with this message:

&&&& RUNNING TensorRT.sample_maskrcnn # ./sample_uff_mask_rcnn
[03/09/2020-17:20:55] [I] Building and running a GPU inference engine for Mask RCNN
sample_uff_mask_rcnn: uff/UffParser.cpp:1073: std::shared_ptr UffParser::parseConv(const uff::Node&, const Fields&, NodesMap&): Assertion `isRegisteredConst(node.inputs(1))' failed.

The problem is in the parsing of the UFF file; the same crash occurs on Windows 10 as well.

Best Regards and Thanks,
Simon

Unfortunately, I ran into the same issue. I don't know whether the problem happens in the conversion step or in the parsing step.

Hi,

Are you using the same steps and model file mentioned in the link below?
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/sampleUffMaskRCNN

If you are using a different model, could you please share it so we can help better?

Meanwhile, you can also try the trtexec command-line tool in verbose mode to debug this issue:
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
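
If it helps to narrow down whether the failure is in the UFF file or in the parser itself, the parse step can also be reproduced outside the sample binary with the TensorRT Python API. This is only a rough sketch, assuming the TensorRT 7 Python bindings are installed and that the plugin library you load provides the custom Mask R-CNN ops; the input/output names and UFF file name are the ones used by the sample:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)
# Register the built-in plugins (PyramidROIAlign_TRT, ResizeNearest_TRT, ...)
trt.init_libnvinfer_plugins(TRT_LOGGER, "")

with trt.Builder(TRT_LOGGER) as builder, \
     builder.create_network() as network, \
     trt.UffParser() as parser:
    parser.register_input("input_image", (3, 1024, 1024))
    parser.register_output("mrcnn_mask/Sigmoid")
    # parse() returns False on failure; the verbose log shows the last node reached
    print("Parsed:", parser.parse("mrcnn_nchw.uff", network))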

Thanks

Hi,

Thank you for your reply! I followed the steps from the link and used the same h5 file (MD5: e98aaff6f99e307b5e2a8a3ff741a518). The output of trtexec is (just the part at the end):

[03/19/2020-16:32:00] [V] [TRT] UFFParser: Applying order forwarding to: mrcnn_class_bn1/Reshape_2/shape
[03/19/2020-16:32:00] [V] [TRT] UFFParser: Parsing mrcnn_class_bn1/Reshape_2[Op: Reshape]. Inputs: mrcnn_class_bn1/moving_variance, mrcnn_class_bn1/Reshape_2/shape
[03/19/2020-16:32:00] [V] [TRT] UFFParser: Applying order forwarding to: mrcnn_class_bn1/Reshape_2
[03/19/2020-16:32:00] [V] [TRT] UFFParser: Parsing mrcnn_class_bn1/batchnorm/add/y[Op: Const].
[03/19/2020-16:32:00] [V] [TRT] UFFParser: mrcnn_class_bn1/batchnorm/add/y ->
[03/19/2020-16:32:00] [V] [TRT] UFFParser: Applying order forwarding to: mrcnn_class_bn1/batchnorm/add/y
[03/19/2020-16:32:00] [V] [TRT] UFFParser: Parsing mrcnn_class_bn1/batchnorm/add[Op: Binary]. Inputs: mrcnn_class_bn1/Reshape_2, mrcnn_class_bn1/batchnorm/add/y
[03/19/2020-16:32:00] [V] [TRT] UFFParser: Applying order forwarding to: mrcnn_class_bn1/batchnorm/add
[03/19/2020-16:32:00] [V] [TRT] UFFParser: Parsing mrcnn_class_bn1/batchnorm/Rsqrt[Op: Unary]. Inputs: mrcnn_class_bn1/batchnorm/add
[03/19/2020-16:32:00] [V] [TRT] UFFParser: Applying order forwarding to: mrcnn_class_bn1/batchnorm/Rsqrt
[03/19/2020-16:32:00] [V] [TRT] UFFParser: Parsing mrcnn_class_bn1/gamma[Op: Const].
[03/19/2020-16:32:00] [V] [TRT] UFFParser: mrcnn_class_bn1/gamma -> [1024]
[03/19/2020-16:32:00] [V] [TRT] UFFParser: Applying order forwarding to: mrcnn_class_bn1/gamma
[03/19/2020-16:32:00] [V] [TRT] UFFParser: Parsing mrcnn_class_bn1/Reshape_4/shape[Op: Const].
[03/19/2020-16:32:00] [V] [TRT] UFFParser: mrcnn_class_bn1/Reshape_4/shape -> [4]
[03/19/2020-16:32:00] [V] [TRT] UFFParser: Applying order forwarding to: mrcnn_class_bn1/Reshape_4/shape
[03/19/2020-16:32:00] [V] [TRT] UFFParser: Parsing mrcnn_class_bn1/Reshape_4[Op: Reshape]. Inputs: mrcnn_class_bn1/gamma, mrcnn_class_bn1/Reshape_4/shape
[03/19/2020-16:32:00] [V] [TRT] UFFParser: Applying order forwarding to: mrcnn_class_bn1/Reshape_4
[03/19/2020-16:32:00] [V] [TRT] UFFParser: Parsing mrcnn_class_bn1/batchnorm/mul[Op: Binary]. Inputs: mrcnn_class_bn1/batchnorm/Rsqrt, mrcnn_class_bn1/Reshape_4
[03/19/2020-16:32:00] [V] [TRT] UFFParser: Applying order forwarding to: mrcnn_class_bn1/batchnorm/mul
[03/19/2020-16:32:00] [V] [TRT] UFFParser: Parsing mrcnn_class_conv1/kernel[Op: Const].
[03/19/2020-16:32:00] [V] [TRT] UFFParser: mrcnn_class_conv1/kernel -> [7,7,256,1024]
[03/19/2020-16:32:00] [V] [TRT] UFFParser: Applying order forwarding to: mrcnn_class_conv1/kernel
[03/19/2020-16:32:00] [V] [TRT] UFFParser: Parsing roi_align_classifier[Op: _PyramidROIAlign_TRT]. Inputs: ROI, fpn_p2/BiasAdd, fpn_p3/BiasAdd, fpn_p4/BiasAdd, fpn_p5/BiasAdd
[03/19/2020-16:32:00] [V] [TRT] UFFParser: Parsing mrcnn_class_conv1/convolution[Op: Conv]. Inputs: mrcnn_class_conv1/kernel, roi_align_classifier
[03/19/2020-16:32:00] [V] [TRT] UFFParser: Inserting transposes for mrcnn_class_conv1/convolution

I ran trtexec with:
trtexec --uff=data\maskrcnn\mrcnn_nchw.uff --uffInput="input_image",3,1024,1024 --output="mrcnn_mask/Sigmoid" --verbose

The debug output of the Mask R-CNN sample stops at the same point; the last output of the logger is:

UFFParser: Parsing mrcnn_class_conv1/convolution[Op: Conv]. Inputs: mrcnn_class_conv1/kernel, roi_align_classifier
UFFParser: Inserting transposes for mrcnn_class_conv1/convolution

I am using CUDA 10.2 with TensorFlow 1.14.0.
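
For completeness, this is how I checked which versions the conversion environment actually picks up (the uff.__version__ attribute is my assumption, which is why I guard it with getattr; the converter itself also prints "UFF Version ..." when it runs):

import tensorflow as tf
import uff

print("TensorFlow:", tf.__version__)                   # 1.14.0 in my environment
print("UFF:", getattr(uff, "__version__", "unknown"))  # converter reports 0.6.5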

Thanks, Simon

Hi,

Can you check the number of nodes in the generated UFF model? If the conversion is successful, there will be 3049 nodes (a quick way to count them is sketched below).
Also, please refer to the link below for a known issue:
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/sampleUffMaskRCNN#known-issues
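
To count the nodes without re-running the converter, something like the following should work. Treat it as a rough sketch: it assumes the uff pip package ships its protobuf bindings as uff.model.uff_pb2, with a MetaGraph message whose graphs each hold a list of nodes; the result should match the "No. nodes" line the converter prints:

import uff.model.uff_pb2 as uff_pb2

# Load the serialized UFF file and parse it as a MetaGraph protobuf
meta_graph = uff_pb2.MetaGraph()
with open("mrcnn_nchw.uff", "rb") as f:
    meta_graph.ParseFromString(f.read())

# Sum the node counts over all graphs; a successful conversion should give 3049
print("No. nodes:", sum(len(g.nodes) for g in meta_graph.graphs))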

Thanks

Hi,

Thank you for the reply! I get 3044 nodes when running the converter. Even after downgrading from CUDA 10.2 to 10.0, I get the same result. I am using an Anaconda environment; "conda list cudatoolkit" shows version 10.0.130 and "conda list cudnn" shows version 7.6.5.

Best Regards,
Simon

Hi,
It seems there is an issue with the generated UFF model.
You can follow these steps to generate the UFF model:

  1. Use the nvcr.io/nvidia/tensorflow:18.12-py3 container image.

  2. Install uff, graphsurgeon, and keras==2.1.3.

  3. Modify uff's transpose-convolution conversion function (a sketch of the patch is shown after this list).

  4. Export the model; the converter output should look like this:
    NOTE: UFF has been tested with TensorFlow 1.14.0.
    WARNING: The version of TensorFlow installed on this system is not guaranteed to work with UFF.
    UFF Version 0.6.5
    === Automatically deduced input nodes ===
    [name: "input_image"
    op: “Placeholder”
    attr {
    key: "dtype"
    value {
    type: DT_FLOAT
    }
    }
    attr {
    key: "shape"
    value {
    shape {
    dim {
    size: -1
    }
    dim {
    size: 3
    }
    dim {
    size: 1024
    }
    dim {
    size: 1024
    }
    }
    }
    }
    ]
    =========================================
    Using output node mrcnn_detection
    Using output node mrcnn_mask/Sigmoid
    Converting to UFF graph
    Warning: No conversion function registered for layer: PyramidROIAlign_TRT yet.
    Converting roi_align_mask_trt as custom op: PyramidROIAlign_TRT
    Warning: No conversion function registered for layer: ResizeNearest_TRT yet.
    Converting fpn_p5upsampled as custom op: ResizeNearest_TRT
    Warning: No conversion function registered for layer: ResizeNearest_TRT yet.
    Converting fpn_p4upsampled as custom op: ResizeNearest_TRT
    Warning: No conversion function registered for layer: ResizeNearest_TRT yet.
    Converting fpn_p3upsampled as custom op: ResizeNearest_TRT
    Warning: No conversion function registered for layer: SpecialSlice_TRT yet.
    Converting mrcnn_detection_bboxes as custom op: SpecialSlice_TRT
    Warning: No conversion function registered for layer: DetectionLayer_TRT yet.
    Converting mrcnn_detection as custom op: DetectionLayer_TRT
    Warning: No conversion function registered for layer: ProposalLayer_TRT yet.
    Converting ROI as custom op: ProposalLayer_TRT
    Warning: keepdims is ignored by the UFF Parser and defaults to True
    Warning: keepdims is ignored by the UFF Parser and defaults to True
    Warning: keepdims is ignored by the UFF Parser and defaults to True
    Warning: keepdims is ignored by the UFF Parser and defaults to True
    Warning: keepdims is ignored by the UFF Parser and defaults to True
    Warning: keepdims is ignored by the UFF Parser and defaults to True
    Warning: keepdims is ignored by the UFF Parser and defaults to True
    Warning: keepdims is ignored by the UFF Parser and defaults to True
    Warning: keepdims is ignored by the UFF Parser and defaults to True
    Warning: keepdims is ignored by the UFF Parser and defaults to True
    Warning: No conversion function registered for layer: PyramidROIAlign_TRT yet.
    Converting roi_align_classifier as custom op: PyramidROIAlign_TRT
    DEBUG [/usr/local/lib/python3.5/dist-packages/uff/converters/tensorflow/converter.py:96] Marking ['mrcnn_detection', 'mrcnn_mask/Sigmoid'] as outputs
    No. nodes: 3049

  5. This will only generate the UFF file. Once the UFF is generated, you can use it with TensorRT 7.0 to build the TRT engine.
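
Regarding step 3: the change is to the conv_transpose call in uff/converters/tensorflow/converter_functions.py. The patched call should look roughly like the snippet below; I am quoting it from memory of the sample's README, so please double-check the exact argument order there instead of copying this blindly:

# Inside the Conv2DTranspose conversion function in
# uff/converters/tensorflow/converter_functions.py, reorder the inputs
# passed to conv_transpose (argument list from memory; verify against the README):
uff_graph.conv_transpose(
    inputs[0], inputs[2], inputs[1],
    strides, padding,
    dilation=None, number_groups=number_groups,
    left_format=lhs_fmt, right_format=rhs_fmt,
    name=name, fields=fields)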

Thanks

Hi,

Thanks again for your help! It took me a while to get the Docker image running with GPU support; I had to use my Linux system. The model converted inside the container now works with the sample, although I still get only 3044 nodes. I then tried converting on my Linux system without Docker, running CUDA 10.0 and TensorFlow 1.12.0, and that model also works fine. That is a bit confusing, since the UFF converter says it has been tested with TensorFlow 1.14.0.

Best Regards and thanks,
Simon

Hi,

This might be due to the known issue below:
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/sampleUffMaskRCNN#known-issues

Thanks