Error converting Maskrcnn keras model to TRT

Description

$ python3 uff_inference.py
[TensorRT] VERBOSE: Registered plugin creator - ::BatchTilePlugin_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::BatchedNMS_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::BatchedNMSDynamic_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::CoordConvAC version 1
[TensorRT] VERBOSE: Registered plugin creator - ::CropAndResize version 1
[TensorRT] VERBOSE: Registered plugin creator - ::DetectionLayer_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::FlattenConcat_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::GenerateDetection_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::GridAnchor_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::GridAnchorRect_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::InstanceNormalization_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::MultilevelCropAndResize_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::MultilevelProposeROI_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::NMS_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::Normalize_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::PriorBox_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::ProposalLayer_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::Proposal version 1
[TensorRT] VERBOSE: Registered plugin creator - ::PyramidROIAlign_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::Region_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::Reorg_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::ResizeNearest_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::RPROI_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::SpecialSlice_TRT version 1

[TensorRT] VERBOSE: UFFParser: mrcnn_mask_bn4/batchnorm/add_1 → [100,256,14,14]
[TensorRT] VERBOSE: UFFParser: Applying order forwarding to: mrcnn_mask_bn4/batchnorm/add_1
[TensorRT] VERBOSE: UFFParser: Parsing activation_74/Relu[Op: Activation]. Inputs: mrcnn_mask_bn4/batchnorm/add_1
[TensorRT] VERBOSE: UFFParser: activation_74/Relu → [100,256,14,14]
[TensorRT] VERBOSE: UFFParser: Applying order forwarding to: activation_74/Relu
[TensorRT] VERBOSE: UFFParser: Parsing mrcnn_mask_deconv/conv2d_transpose[Op: ConvTranspose]. Inputs: mrcnn_mask_deconv/kernel, mrcnn_mask_deconv/stack, activation_74/Relu
[TensorRT] VERBOSE: UFFParser: Inserting transposes for mrcnn_mask_deconv/conv2d_transpose
[TensorRT] ERROR: UffParser: Parser error: mrcnn_mask_deconv/conv2d_transpose: Invalid shape
[TensorRT] ERROR: Network must have at least one output
[TensorRT] ERROR: Network validation failed.
Traceback (most recent call last):
File "uff_inference.py", line 96, in <module>
inputs, outputs, bindings, stream = allocate_buffers(engine)
File "uff_inference.py", line 54, in allocate_buffers
for binding in engine:
TypeError: 'NoneType' object is not iterable
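
For context: the "Invalid shape" parser error means the mrcnn_mask_deconv ConvTranspose node is rejected, so no output tensors get registered, build_cuda_engine() returns None, and iterating over that None engine inside allocate_buffers() is what raises the TypeError at the end of the traceback. Below is a minimal sketch of failing early instead, assuming the usual TensorRT 7 Python UFF workflow; the input/output node names and the path are placeholders, not taken from uff_inference.py.

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)

# Placeholders -- replace with the real path and node names used in uff_inference.py.
UFF_PATH = "mrcnn_nchw.uff"
INPUT_NODE = "input_image"
INPUT_SHAPE = (3, 1024, 1024)
OUTPUT_NODES = ["mrcnn_detection", "mrcnn_mask/Sigmoid"]

def build_engine(uff_path):
    with trt.Builder(TRT_LOGGER) as builder, \
         builder.create_network() as network, \
         trt.UffParser() as parser:
        builder.max_workspace_size = 1 << 30
        parser.register_input(INPUT_NODE, INPUT_SHAPE)
        for name in OUTPUT_NODES:
            parser.register_output(name)
        # parse() returns False when a node (here the ConvTranspose) is rejected.
        if not parser.parse(uff_path, network):
            raise RuntimeError("UFF parsing failed, see the parser errors above")
        engine = builder.build_cuda_engine(network)
        if engine is None:
            raise RuntimeError("Engine build failed")
        return engine

engine = build_engine(UFF_PATH)  # fail here instead of inside allocate_buffers()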

Environment

TensorRT Version: 7.1.34
GPU Type: GeForce GTX 1080 Ti
Nvidia Driver Version: 440.33.01
CUDA Version: 10.2
CUDNN Version: 8.0.5
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.6.9
TensorFlow Version (if applicable): 1.15.0

Relevant Files

mrcnn_nchw.uff - Google Drive << model
uff_inference.py - Google Drive << inference code

Steps To Reproduce

TensorRT/samples/opensource/sampleUffMaskRCNN at release/7.1 · NVIDIA/TensorRT · GitHub << how to convert the pretrained .h5 model to mrcnn_nchw.uff above
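
For reference, the conversion step in that sample (as recalled from the release/7.1 README; please verify the exact script name and flags there, and the paths below are placeholders) looks roughly like:

cd TensorRT/samples/opensource/sampleUffMaskRCNN/converted
python3 mrcnn_to_trt_single.py -w /path/to/mask_rcnn_coco.h5 -o /path/to/mrcnn_nchw.uff -p ./config.py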

run:
# on the local Ubuntu environment, outside the Docker container
python3 uff_inference.py

Please help. Thank you.

Hi,
Please share the ONNX model and the script, if not already shared, so that we can assist you better.
Meanwhile, you can try a few things:

1. Validate your model with the snippet below.

check_model.py

import sys
import onnx

# Replace with the path to your exported ONNX model file.
filename = "yourONNXmodel.onnx"
model = onnx.load(filename)
onnx.checker.check_model(model)  # raises if the model is invalid
2. Try running your model with the trtexec command (an example invocation is shown below).
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
If you are still facing the issue, please share the trtexec --verbose log for further debugging.
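
For example (the model path is a placeholder for your exported ONNX file):

trtexec --onnx=/path/to/model.onnx --verbose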
Thanks!

@NVES

Thank you for quick answer.

On the GitHub repo there is only a UFF conversion sample for Mask R-CNN, even in the latest version of TensorRT.

Are you suggesting using ONNX instead? Is there any sample I can use?

Hi,

We recommend that you use the latest TensorRT version: https://developer.nvidia.com/nvidia-tensorrt-8x-download
The UFF and Caffe parsers have been deprecated from TensorRT 7 onwards, so please try the ONNX parser instead.
Please check the link below for the same.
GitHub - onnx/onnx-tensorrt: ONNX-TensorRT: TensorRT backend for ONNX
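
In case it helps, here is a minimal sketch of building an engine from an ONNX file with the TensorRT 8 Python API. The file names are placeholders, and it assumes you have already exported the Keras model to ONNX, which is not covered in this thread.

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)

def build_engine_from_onnx(onnx_path, engine_path):
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("ONNX parsing failed")

    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 30  # 1 GiB

    # TensorRT 8: build_serialized_network replaces build_cuda_engine.
    serialized = builder.build_serialized_network(network, config)
    if serialized is None:
        raise RuntimeError("Engine build failed")
    with open(engine_path, "wb") as f:
        f.write(serialized)

# Placeholder file names.
build_engine_from_onnx("mrcnn.onnx", "mrcnn.plan")

The saved .plan file can then be deserialized with trt.Runtime for inference.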

Thanks!