Parsing the UFF network with the UffParser and building an engine with build_cuda_engine fails

I want to build a CUDA engine from a DeepLab TensorFlow frozen graph for a semantic segmentation task. Here is how I make the UFF model:

import uff

OUTPUT_NAME = ["SemanticPredictions"]
frozen_tf_graph_file = 'frozen_inference_graph.pb'

# Generate a UFF model from the frozen TensorFlow graph.
uff_model = uff.from_tensorflow_frozen_model(
    frozen_file=frozen_tf_graph_file,
    output_nodes=OUTPUT_NAME,
    output_filename='uff_tf_model.uff',
    text=False,
)

and here is the result from the above code:

UFF Version 0.5.5
=== Automatically deduced input nodes ===
[name: "ImageTensor"
op: "Placeholder"
attr {
  key: "_output_shapes"
  value {
    list {
      shape {
        dim {
          size: 1
        }
        dim {
          size: -1
        }
        dim {
          size: -1
        }
        dim {
          size: 3
        }
      }
    }
  }
}
attr {
  key: "dtype"
  value {
    type: DT_UINT8
  }
}
attr {
  key: "shape"
  value {
    shape {
      dim {
        size: 1
      }
      dim {
        size: -1
      }
      dim {
        size: -1
      }
      dim {
        size: 3
      }
    }
  }
}
]
=========================================

Using output node SemanticPredictions
Converting to UFF graph
Warning: No conversion function registered for layer: Slice yet.
Converting SemanticPredictions as custom op: Slice
Warning: No conversion function registered for layer: ArgMax yet.
Converting ArgMax as custom op: ArgMax
Warning: No conversion function registered for layer: ResizeBilinear yet.
Converting ResizeBilinear_3 as custom op: ResizeBilinear
Warning: No conversion function registered for layer: ResizeBilinear yet.
Converting ResizeBilinear_2 as custom op: ResizeBilinear
Warning: No conversion function registered for layer: BatchToSpaceND yet.
Converting MobilenetV2/expanded_conv_16/depthwise/depthwise/BatchToSpaceND as custom op: BatchToSpaceND
Warning: No conversion function registered for layer: SpaceToBatchND yet.
Converting MobilenetV2/expanded_conv_16/depthwise/depthwise/SpaceToBatchND as custom op: SpaceToBatchND
Warning: No conversion function registered for layer: BatchToSpaceND yet.
Converting MobilenetV2/expanded_conv_15/depthwise/depthwise/BatchToSpaceND as custom op: BatchToSpaceND
Warning: No conversion function registered for layer: SpaceToBatchND yet.
Converting MobilenetV2/expanded_conv_15/depthwise/depthwise/SpaceToBatchND as custom op: SpaceToBatchND
Warning: No conversion function registered for layer: BatchToSpaceND yet.
Converting MobilenetV2/expanded_conv_14/depthwise/depthwise/BatchToSpaceND as custom op: BatchToSpaceND
Warning: No conversion function registered for layer: SpaceToBatchND yet.
Converting MobilenetV2/expanded_conv_14/depthwise/depthwise/SpaceToBatchND as custom op: SpaceToBatchND
Warning: No conversion function registered for layer: BatchToSpaceND yet.
Converting MobilenetV2/expanded_conv_13/depthwise/depthwise/BatchToSpaceND as custom op: BatchToSpaceND
Warning: No conversion function registered for layer: SpaceToBatchND yet.
Converting MobilenetV2/expanded_conv_13/depthwise/depthwise/SpaceToBatchND as custom op: SpaceToBatchND
Warning: No conversion function registered for layer: BatchToSpaceND yet.
Converting MobilenetV2/expanded_conv_12/depthwise/depthwise/BatchToSpaceND as custom op: BatchToSpaceND
Warning: No conversion function registered for layer: SpaceToBatchND yet.
Converting MobilenetV2/expanded_conv_12/depthwise/depthwise/SpaceToBatchND as custom op: SpaceToBatchND
Warning: No conversion function registered for layer: BatchToSpaceND yet.
Converting MobilenetV2/expanded_conv_11/depthwise/depthwise/BatchToSpaceND as custom op: BatchToSpaceND
Warning: No conversion function registered for layer: SpaceToBatchND yet.
Converting MobilenetV2/expanded_conv_11/depthwise/depthwise/SpaceToBatchND as custom op: SpaceToBatchND
Warning: No conversion function registered for layer: BatchToSpaceND yet.
Converting MobilenetV2/expanded_conv_10/depthwise/depthwise/BatchToSpaceND as custom op: BatchToSpaceND
Warning: No conversion function registered for layer: SpaceToBatchND yet.
Converting MobilenetV2/expanded_conv_10/depthwise/depthwise/SpaceToBatchND as custom op: SpaceToBatchND
Warning: No conversion function registered for layer: BatchToSpaceND yet.
Converting MobilenetV2/expanded_conv_9/depthwise/depthwise/BatchToSpaceND as custom op: BatchToSpaceND
Warning: No conversion function registered for layer: SpaceToBatchND yet.
Converting MobilenetV2/expanded_conv_9/depthwise/depthwise/SpaceToBatchND as custom op: SpaceToBatchND
Warning: No conversion function registered for layer: BatchToSpaceND yet.
Converting MobilenetV2/expanded_conv_8/depthwise/depthwise/BatchToSpaceND as custom op: BatchToSpaceND
Warning: No conversion function registered for layer: SpaceToBatchND yet.
Converting MobilenetV2/expanded_conv_8/depthwise/depthwise/SpaceToBatchND as custom op: SpaceToBatchND
Warning: No conversion function registered for layer: BatchToSpaceND yet.
Converting MobilenetV2/expanded_conv_7/depthwise/depthwise/BatchToSpaceND as custom op: BatchToSpaceND
Warning: No conversion function registered for layer: SpaceToBatchND yet.
Converting MobilenetV2/expanded_conv_7/depthwise/depthwise/SpaceToBatchND as custom op: SpaceToBatchND
Warning: No conversion function registered for layer: ExpandDims yet.
Converting ExpandDims_1 as custom op: ExpandDims
Warning: No conversion function registered for layer: ResizeBilinear yet.
Converting ResizeBilinear as custom op: ResizeBilinear
Warning: No conversion function registered for layer: Cast yet.
Converting ToInt32 as custom op: Cast
Warning: No conversion function registered for layer: Cast yet.
Converting ToFloat_1 as custom op: Cast
Warning: No conversion function registered for layer: Cast yet.
Converting Cast as custom op: Cast
Warning: No conversion function registered for layer: ExpandDims yet.
Converting ExpandDims as custom op: ExpandDims
Warning: No conversion function registered for layer: ResizeBilinear yet.
Converting ResizeBilinear_1 as custom op: ResizeBilinear
No. nodes: 594
UFF Output written to uff_tf_model.uff
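Each "custom op" warning above means the converter emitted a node that TensorRT cannot execute without a plugin. A small hypothetical helper (plain Python, operating only on the log text) can summarize them; run over the full log above it would report ArgMax, BatchToSpaceND, Cast, ExpandDims, ResizeBilinear, Slice, and SpaceToBatchND:

```python
import re

def flagged_custom_ops(log_text):
    """Return the unique layer types the UFF converter could not map natively."""
    return sorted(set(re.findall(
        r"No conversion function registered for layer: (\w+) yet", log_text)))

# Small excerpt of the converter log above, for illustration.
sample = (
    "Warning: No conversion function registered for layer: Slice yet.\n"
    "Warning: No conversion function registered for layer: ArgMax yet.\n"
    "Warning: No conversion function registered for layer: Slice yet.\n"
)
print(flagged_custom_ops(sample))  # ['ArgMax', 'Slice']
```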

And here I am trying to build the engine:

import cv2
import numpy as np
import tensorrt

import common  # buffer-allocation helpers from the TensorRT Python samples

TRT_LOGGER = tensorrt.Logger(tensorrt.Logger.WARNING)

# Loads a test image into the provided pagelocked buffer.
def load_normalized_test_case(img_path, pagelocked_buffer):
    # Flatten the image into a 1D array and copy it into pagelocked memory.
    img = cv2.imread(img_path, 1)
    np.copyto(pagelocked_buffer, img.ravel())

# For more information on TRT basics, refer to the introductory samples.
with tensorrt.Builder(TRT_LOGGER) as builder, builder.create_network() as network, tensorrt.UffParser() as parser:
    builder.max_workspace_size = common.GiB(1)

    # Parse the Uff Network
    # Node names are case-sensitive: the deduced input is "ImageTensor".
    # register_input assumes CHW ordering by default, so an HWC shape may
    # also need order=tensorrt.UffInputOrder.NHWC.
    parser.register_input("ImageTensor", (513, 289, 3))
    parser.register_output("SemanticPredictions")
    parser.parse("uff_tf_model.uff", network)

    # Build and return an engine.
    engine = builder.build_cuda_engine(network)

    # Build an engine, allocate buffers and create a stream.
    # For more information on buffer allocation, refer to the introductory samples.
    inputs, outputs, bindings, stream = common.allocate_buffers(engine)
    with engine.create_execution_context() as context:
        load_normalized_test_case(img_path='test.png', pagelocked_buffer=inputs[0].host)
        # For more information on performing inference, refer to the introductory samples.
        # The common.do_inference function will return a list of outputs - we only have one in this case.
        seg_map = common.do_inference(context, bindings=bindings, inputs=inputs, outputs=outputs, stream=stream)

And here is the output from the last script when I run it:

[TensorRT] ERROR: UFFParser: Validator error: ResizeBilinear_3: Unsupported operation _ResizeBilinear
[TensorRT] ERROR: Network must have at least one output
Traceback (most recent call last):
  File "/home/pedram/catkin-ws/src/seg_proj/scripts/test.py", line 323, in <module>
    inputs, outputs, bindings, stream = common.allocate_buffers(engine)
  File "/home/pedram/catkin-ws/src/seg_proj/scripts/common.py", line 124, in allocate_buffers
    for binding in engine:
TypeError: 'NoneType' object is not iterable

Because of these errors, the parsed `network` object is empty, so builder.build_cuda_engine(network) returns None. I cannot figure out what is going wrong here.
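One way to surface this failure mode immediately, instead of hitting the downstream TypeError, is to check both return values: parser.parse returns False when any layer fails validation, and build_cuda_engine returns None for an empty network. A sketch, duck-typed against the TensorRT 5 Python objects:

```python
def build_engine_or_fail(builder, parser, network, uff_path):
    """Build a TensorRT engine, raising instead of silently returning None.

    parser.parse returns False when any layer fails validation (e.g. the
    unsupported _ResizeBilinear here), and builder.build_cuda_engine then
    returns None for the resulting empty network.
    """
    if not parser.parse(uff_path, network):
        raise RuntimeError(
            "UFF parse failed; check the [TensorRT] ERROR lines for unsupported ops")
    engine = builder.build_cuda_engine(network)
    if engine is None:
        raise RuntimeError("build_cuda_engine returned None; see the builder log")
    return engine
```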

P.S: I am using:
Ubuntu 16.04
GPU: NVIDIA GTX 1050 Ti
NVIDIA driver version: 384.130
CUDA: 9.0
cuDNN: 7
Python: 2.7
TensorFlow version: 1.13.0rc
TensorRT version: 5.0.2.6

The Deeplab frozen graph: http://download.tensorflow.org/models/deeplabv3_mnv2_cityscapes_train_2018_02_05.tar.gz

The test image could be something like this: https://i.ibb.co/rwRZMvq/rsz-louvl.png

Can you share the output log from the build process?

Sorry, I did not understand you. Can you please tell me where I can access the output log?

OK, I didn’t see the build error log earlier, now I see it.

[TensorRT] ERROR: UFFParser: Validator error: ResizeBilinear_3: Unsupported operation _ResizeBilinear

It looks like the model you are converting to TensorRT contains an unsupported operation “_ResizeBilinear”.
For a list of supported operations, please reference: https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#support_op

For unsupported layers, users can extend TensorRT functionality by implementing custom layers via the IPluginV2 class in the C++ and Python APIs. Custom layers, often referred to as plugins, are implemented and instantiated by an application, and their lifetime must span their use within a TensorRT engine. https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#extending
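To make that concrete: TensorRT ships a `graphsurgeon` module, and `convert-to-uff` accepts a preprocessing script (the `-p` flag) that can collapse unsupported TF nodes into plugin nodes before UFF conversion. The sketch below uses a hypothetical plugin op name, "ResizeBilinear_TRT"; a matching IPluginV2 implementation must still be written and registered with TensorRT before the parser can resolve it.

```python
# preprocess.py -- a sketch, passed to the converter as:
#   convert-to-uff frozen_inference_graph.pb -O SemanticPredictions -p preprocess.py
# "ResizeBilinear_TRT" is a hypothetical plugin op name; a matching IPluginV2
# implementation must be registered with TensorRT for parsing to succeed.

# Map offending TF node names (from the converter warnings) to the plugin
# node name and plugin op that should replace each of them.
PLUGIN_MAP_SPEC = {
    "ResizeBilinear_3": ("resize_3", "ResizeBilinear_TRT"),
}

def preprocess(dynamic_graph):
    """Collapse unsupported TF nodes into single TensorRT plugin nodes."""
    import graphsurgeon as gs  # ships with the TensorRT Python packages
    namespace_map = {
        tf_name: gs.create_plugin_node(name=plugin_name, op=plugin_op)
        for tf_name, (plugin_name, plugin_op) in PLUGIN_MAP_SPEC.items()
    }
    dynamic_graph.collapse_namespaces(namespace_map)
```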

regards,
NVIDIA Enterprise Support