convert_to_uff.py fails to convert custom layers

Linux distro and version: Ubuntu 16.04
GPU type: GeForce GTX 1080
NVIDIA driver version: 384.130
CUDA version: release 9.0, V9.0.225
cuDNN version: 7
Python version: 3.5
TensorFlow version: 1.8.0
TensorRT version: 5.0.2.6

Describe the problem:
convert_to_uff.py fails to convert my .pb file to .uff.
Command:
python3 /usr/local/lib/python3.5/dist-packages/uff/bin/convert_to_uff.py trainedweights.pb -p config.py

config.py declares the unsupported custom layers. It seems that I need to register these layers beforehand, but I don't know how.

config.py:

import graphsurgeon as gs
import tensorflow as tf
import tensorrt as trt

TRT_LOGGER = trt.Logger()
trt.init_libnvinfer_plugins(TRT_LOGGER, '')

ROIAlignClassifier = gs.create_plugin_node(
    name="roi_align_classifier_TRT",
    op="roi_align_classifier_Plugin_TRT",
    dtype=tf.float32,
    pool_size=14,
    image_shape=[448, 448, 3])

ROIAlignKpMask = gs.create_plugin_node(
    name="roi_align_kpmask_TRT",
    op="roi_align_kpmask_Plugin_TRT",
    dtype=tf.float32,
    pool_size=14,
    image_shape=[448, 448, 3])

mrcnn_keypoint_mask_upsample_2 = gs.create_plugin_node(
    name="upsample_2_TRT", op="upsample_2_Plugin_TRT", dtype=tf.float32)
mrcnn_keypoint_mask_upsample_1 = gs.create_plugin_node(
    name="upsample_1_TRT", op="upsample_1_Plugin_TRT", dtype=tf.float32)
fpn_p5upsampled = gs.create_plugin_node(
    name="upsample_5_TRT", op="upsample_5_Plugin_TRT", dtype=tf.float32)
fpn_p4upsampled = gs.create_plugin_node(
    name="upsample_4_TRT", op="upsample_4_Plugin_TRT", dtype=tf.float32)
fpn_p3upsampled = gs.create_plugin_node(
    name="upsample_3_TRT", op="upsample_3_Plugin_TRT", dtype=tf.float32)
 
 
namespace_plugin_map = {
    "roi_align_classifier": ROIAlignClassifier,
    "roi_align_keypoint_mask": ROIAlignKpMask,
    "fpn_p3upsampled": fpn_p3upsampled,
    "fpn_p4upsampled": fpn_p4upsampled,
    "fpn_p5upsampled": fpn_p5upsampled,
    "mrcnn_keypoint_mask_upsample_1": mrcnn_keypoint_mask_upsample_1,
    "mrcnn_keypoint_mask_upsample_2": mrcnn_keypoint_mask_upsample_2,
}

def preprocess(dynamic_graph):
    # Create a new graph by collapsing namespaces into the plugin nodes above.
    dynamic_graph.collapse_namespaces(namespace_plugin_map)
    # Remove the original graph outputs so only a single output node remains.
    dynamic_graph.remove(dynamic_graph.graph_outputs, remove_exclusive_dependencies=False)
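For context, my understanding of what collapse_namespaces does, as a toy sketch in plain Python (this is not the real graphsurgeon implementation): every node whose name lives under a mapped namespace prefix gets replaced by the single plugin node that the map assigns to that namespace.

```python
def collapse(node_names, plugin_map):
    """Toy model of namespace collapsing: all nodes under a mapped
    namespace prefix are folded into one plugin node name."""
    collapsed = []
    for name in node_names:
        ns = name.split("/")[0]  # top-level namespace of the node
        if ns in plugin_map:
            # Whole namespace becomes a single plugin node (added once).
            if plugin_map[ns] not in collapsed:
                collapsed.append(plugin_map[ns])
        else:
            collapsed.append(name)  # nodes outside the map are kept as-is
    return collapsed

nodes = ["fpn_p3upsampled/mul",
         "fpn_p3upsampled/ResizeNearestNeighbor",
         "conv1/BiasAdd"]
print(collapse(nodes, {"fpn_p3upsampled": "upsample_3_TRT"}))
# both fpn_p3upsampled/* nodes collapse into the one plugin node
```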

Errors and warnings:

Using output node mrcnn_prob/Sum
Converting to UFF graph
Warning: keepdims is ignored by the UFF Parser and defaults to True
Warning: keepdims is ignored by the UFF Parser and defaults to True
Warning: No conversion function registered for layer: upsample_2_Plugin_TRT yet.
Converting upsample_2_TRT as custom op: upsample_2_Plugin_TRT
Warning: No conversion function registered for layer: upsample_1_Plugin_TRT yet.
Converting upsample_1_TRT as custom op: upsample_1_Plugin_TRT
Warning: No conversion function registered for layer: roi_align_kpmask_Plugin_TRT yet.
Converting roi_align_kpmask_TRT as custom op: roi_align_kpmask_Plugin_TRT
Warning: No conversion function registered for layer: upsample_5_Plugin_TRT yet.
Converting upsample_5_TRT as custom op: upsample_5_Plugin_TRT
Warning: No conversion function registered for layer: upsample_4_Plugin_TRT yet.
Converting upsample_4_TRT as custom op: upsample_4_Plugin_TRT
Warning: No conversion function registered for layer: upsample_3_Plugin_TRT yet.
Converting upsample_3_TRT as custom op: upsample_3_Plugin_TRT
Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/uff/bin/convert_to_uff.py", line 93, in <module>
    main()
  File "/usr/local/lib/python3.5/dist-packages/uff/bin/convert_to_uff.py", line 89, in main
    debug_mode=args.debug
  File "/usr/local/lib/python3.5/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 187, in from_tensorflow_frozen_model
    return from_tensorflow(graphdef, output_nodes, preprocessor, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 157, in from_tensorflow
    debug_mode=debug_mode)
  File "/usr/local/lib/python3.5/dist-packages/uff/converters/tensorflow/converter.py", line 94, in convert_tf2uff_graph
    uff_graph, input_replacements, debug_mode=debug_mode)
  File "/usr/local/lib/python3.5/dist-packages/uff/converters/tensorflow/converter.py", line 79, in convert_tf2uff_node
    op, name, tf_node, inputs, uff_graph, tf_nodes=tf_nodes, debug_mode=debug_mode)
  File "/usr/local/lib/python3.5/dist-packages/uff/converters/tensorflow/converter.py", line 47, in convert_layer
    return cls.registry_[op](name, tf_node, inputs, uff_graph, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/uff/converters/tensorflow/converter_functions.py", line 410, in convert_strided_slice
    raise ValueError("ellipsis_mask not supported")
ValueError: ellipsis_mask not supported

I found the layers containing "ellipsis_mask" and rewrote config.py; now the .uff file is generated successfully. But I still need to implement the custom layers that UFF does not support, using the TensorRT plugin API in C++.
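For anyone hitting the same ValueError: ellipsis_mask is set on a StridedSlice node when the TensorFlow code indexes a tensor with `...`, and the workaround is to rewrite the slice with explicit per-dimension slices. A small stdlib-only sketch of the equivalence (the function name is mine, purely illustrative, not part of any converter):

```python
def expand_ellipsis(index, ndim):
    """Rewrite an index tuple containing Ellipsis into explicit full slices.

    x[..., 0] on a 4-D tensor is the same as x[:, :, :, 0]; TensorFlow sets
    ellipsis_mask on the StridedSlice op for the former, which the UFF
    converter rejects, while the explicit form converts fine.
    """
    if Ellipsis not in index:
        return index
    i = index.index(Ellipsis)
    fill = ndim - (len(index) - 1)  # dimensions the Ellipsis stands for
    return index[:i] + (slice(None),) * fill + index[i + 1:]

# (..., 0) on a 4-D tensor expands to three full slices plus the final index
print(expand_ellipsis((Ellipsis, 0), 4))
```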

Sorry, I am still confused about a couple of things: how do you know the names to use for "op", and what is the expected format of namespace_plugin_map?

For example, in your ROIAlignClassifier you wrote op="roi_align_classifier_Plugin_TRT", but I cannot find that name on any website or in any documentation.

Because I cannot find any instructions, I am struggling to write the config.py file…

Would you mind explaining this part a little?

Thanks a lot!!!