The uff_custom_plugin Python sample

Hi,

I apologize in advance for asking some quite basic questions, but I would like to clarify a few things.

However, I am really confused about how prepare_namespace_plugin_map and ModelData are built in the uff_custom_plugin Python sample.

In lenet5.py, one part of the original code is:

def build_model():
    model = tf.keras.models.Sequential()
    model.add(tf.keras.layers.InputLayer(input_shape=[1, 28, 28], name="InputLayer"))
    model.add(tf.keras.layers.Flatten())
    model.add(tf.keras.layers.Dense(512))
    model.add(tf.keras.layers.Activation(activation=tf.nn.relu6, name="ReLU6"))
    model.add(tf.keras.layers.Dense(10, activation=tf.nn.softmax, name="OutputLayer"))
    return model

Because the ReLU6 operation is not supported by TensorRT's UFF parser, we need to provide it as a custom plugin.
In sample.py:

class ModelData(object):
    INPUT_NAME = "InputLayer"
    INPUT_SHAPE = (MNIST_CHANNELS, MNIST_IMAGE_SIZE, MNIST_IMAGE_SIZE)
    RELU6_NAME = "ReLU6"
    OUTPUT_NAME = "OutputLayer/Softmax"
    OUTPUT_SHAPE = (MNIST_IMAGE_SIZE, )
    DATA_TYPE = trt.float32

def prepare_namespace_plugin_map():
    trt_relu6 = gs.create_plugin_node(name="trt_relu6", op="CustomClipPlugin", clipMin=0.0, clipMax=6.0)
    namespace_plugin_map = {
        ModelData.RELU6_NAME: trt_relu6
    }
    return namespace_plugin_map

Below are the results of some tests I ran earlier.

If I rename the ReLU6 layer to ReLU6_1, I also have to change RELU6_NAME in ModelData. Then it works well.

def build_model():
    model = tf.keras.models.Sequential()
    model.add(tf.keras.layers.InputLayer(input_shape=[1, 28, 28], name="InputLayer"))
    model.add(tf.keras.layers.Flatten())
    model.add(tf.keras.layers.Dense(512))
    model.add(tf.keras.layers.Activation(activation=tf.nn.relu6, name="ReLU6_1"))
    model.add(tf.keras.layers.Dense(10, activation=tf.nn.softmax, name="OutputLayer"))
    return model

class ModelData(object):
    INPUT_NAME = "InputLayer"
    INPUT_SHAPE = (MNIST_CHANNELS, MNIST_IMAGE_SIZE, MNIST_IMAGE_SIZE)
    RELU6_NAME = "ReLU6_1"
    OUTPUT_NAME = "OutputLayer/Softmax"
    OUTPUT_SHAPE = (MNIST_IMAGE_SIZE, )
    DATA_TYPE = trt.float32

In this case there is only one ReLU6. Now suppose there are two or more ReLU6 layers in the network, each given a different name.

def build_model():
    model = tf.keras.models.Sequential()
    model.add(tf.keras.layers.InputLayer(input_shape=[1, 28, 28], name="InputLayer"))
    model.add(tf.keras.layers.Flatten())
    model.add(tf.keras.layers.Dense(512))
    model.add(tf.keras.layers.Activation(activation=tf.nn.relu6, name="ReLU6"))
    model.add(tf.keras.layers.Dense(512))
    model.add(tf.keras.layers.Activation(activation=tf.nn.relu6, name="ReLU6_1"))
    model.add(tf.keras.layers.Dense(10, activation=tf.nn.softmax, name="OutputLayer"))
    return model

One is ReLU6, and the other is ReLU6_1.

How can I correctly set the parameters in the ModelData and prepare_namespace_plugin_map?

My idea is to add the second ReLU6_1 to ModelData and namespace_plugin_map.

class ModelData(object):
    INPUT_NAME = "InputLayer"
    INPUT_SHAPE = (MNIST_CHANNELS, MNIST_IMAGE_SIZE, MNIST_IMAGE_SIZE)
    RELU6_NAME = "ReLU6"
    RELU6_1_NAME = "ReLU6_1"
    OUTPUT_NAME = "OutputLayer/Softmax"
    OUTPUT_SHAPE = (MNIST_IMAGE_SIZE, )
    DATA_TYPE = trt.float32

Although it appears to work, I don’t know whether it really does.
It may only be working for the first ReLU6, not the second one.

My questions are

  1. Do I need to add ModelData.RELU6_1_NAME to the namespace_plugin_map? For example: ``` namespace_plugin_map = { ModelData.RELU6_NAME: trt_relu6, ModelData.RELU6_1_NAME: trt_relu6 } ```

    I tried this, but got the following error:

    [TensorRT] ERROR: UffParser: Graph error: Cycle graph detected
    [TensorRT] ERROR: Network must have at least one output
    [TensorRT] ERROR: Network validation failed.
    Traceback (most recent call last):
      File "sample.py", line 208, in <module>
        main()
      File "sample.py", line 196, in main
        with build_engine(MODEL_PATH) as engine:
    AttributeError: __enter__
    

    I am confused here…
    How can I correct it?

  2. Suppose we had to add each ReLU6 to ModelData: if there are 50 ReLU6 layers in a neural network, would we really need to list all 50 names in ModelData? For example,
    class ModelData(object):
        INPUT_NAME = "InputLayer"
        INPUT_SHAPE = (MNIST_CHANNELS, MNIST_IMAGE_SIZE, MNIST_IMAGE_SIZE)
        RELU6_NAME = "ReLU6"
        RELU6_1_NAME = "ReLU6_1"
        RELU6_2_NAME = "ReLU6_2"
        RELU6_3_NAME = "ReLU6_3"
        RELU6_4_NAME = "ReLU6_4"
                     .
                     .
                     .
        RELU6_49_NAME = "ReLU6_49"
        OUTPUT_NAME = "OutputLayer/Softmax"
        OUTPUT_SHAPE = (MNIST_IMAGE_SIZE, )
        DATA_TYPE = trt.float32
    

    However, I don’t think this is the intended approach…

I have been stuck on this part for many days while trying to convert a .pb file to .uff…
Could you give me some advice or help?

Thank you very much!!

Hi Chieh,

You can do something like this:

def prepare_namespace_plugin_map(dynamic_graph):
    # In this sample, the only operation that is not supported by TensorRT
    # is tf.nn.relu6, so we create a new node which will tell UffParser which
    # plugin to run and with which arguments in place of tf.nn.relu6.
    nodes = [n.name for n in dynamic_graph.as_graph_def().node]
    ns = {}
    for node in nodes:
        print(node)  # Optional: print every node name in the graph
        if "relu6" in node.lower():
            ns[node] = gs.create_plugin_node(name=node, op="CustomClipPlugin", clipMin=0.0, clipMax=6.0)
    return ns
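
Note that the loop creates a separate plugin node for each matched graph node (name=node), rather than reusing one trt_relu6 node for all of them. The matching itself is just a substring check, which can be seen in isolation with plain strings (a hypothetical sketch; the node names below are assumed for illustration and don't require TensorFlow):

```python
def relu6_node_names(node_names):
    # Same filter as in the loop above: keep any node whose name
    # contains "relu6", case-insensitively.
    return [n for n in node_names if "relu6" in n.lower()]

# Node names resembling those of the converted graph (assumed examples).
nodes = [
    "InputLayer",
    "flatten/Reshape",
    "dense/MatMul",
    "ReLU6/Relu6",
    "activation/Relu6",
    "activation_1/Relu6",
    "OutputLayer/Softmax",
]
print(relu6_node_names(nodes))
# → ['ReLU6/Relu6', 'activation/Relu6', 'activation_1/Relu6']
```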

And you don’t have to bother specifying every single ReLU6 layer name if you don’t want to. Keras will assign unique names containing “relu6” if you leave it unspecified:

model.add(tf.keras.layers.Activation(activation=tf.nn.relu6, name="ReLU6"))
model.add(tf.keras.layers.Activation(activation=tf.nn.relu6))#, name="ReLU6"))
model.add(tf.keras.layers.Activation(activation=tf.nn.relu6))#, name="ReLU6"))

which creates nodes named:

ReLU6/Relu6
activation/Relu6
activation_1/Relu6
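
The suffixing scheme can be mimicked in plain Python (a minimal sketch of the naming behavior described above, not Keras' actual implementation):

```python
from collections import defaultdict

def unique_name(base, counts):
    # First use of a base name is returned as-is; subsequent uses
    # get numeric suffixes _1, _2, ...
    n = counts[base]
    counts[base] += 1
    return base if n == 0 else "{}_{}".format(base, n)

counts = defaultdict(int)
print([unique_name("activation", counts) for _ in range(3)])
# → ['activation', 'activation_1', 'activation_2']
```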

I did a quick test using /opt/tensorrt/samples/python/uff_custom_plugin/:

Here’s my modified model in lenet5.py:

def build_model():
    # Create the keras model
    model = tf.keras.models.Sequential()
    model.add(tf.keras.layers.InputLayer(input_shape=[1, 28, 28], name="InputLayer"))
    model.add(tf.keras.layers.Flatten())
    model.add(tf.keras.layers.Dense(512))
    model.add(tf.keras.layers.Activation(activation=tf.nn.relu6, name="ReLU6"))
    model.add(tf.keras.layers.Activation(activation=tf.nn.relu6))#, name="ReLU6"))
    model.add(tf.keras.layers.Activation(activation=tf.nn.relu6))#, name="ReLU6"))
    model.add(tf.keras.layers.Dense(10, activation=tf.nn.softmax, name="OutputLayer"))
    return model

And running sample.py with graphsurgeon modifications above:

...
Converting to UFF graph
Warning: No conversion function registered for layer: CustomClipPlugin yet.
Converting activation_1/Relu6 as custom op: CustomClipPlugin
WARNING:tensorflow:From /usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py:179: The name tf.AttrValue is deprecated. Please use tf.compat.v1.AttrValue instead.

Warning: No conversion function registered for layer: CustomClipPlugin yet.
Converting activation/Relu6 as custom op: CustomClipPlugin
Warning: No conversion function registered for layer: CustomClipPlugin yet.
Converting ReLU6/Relu6 as custom op: CustomClipPlugin
DEBUG: convert reshape to flatten node
DEBUG [/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py:96] Marking ['OutputLayer/Softmax'] as outputs
No. nodes: 15
UFF Output written to /workspace/tensorrt/samples/python/uff_custom_plugin/models/trained_lenet5.uff
UFF Text Output written to /workspace/tensorrt/samples/python/uff_custom_plugin/models/trained_lenet5.pbtxt

=== Testing ===
Loading Test Case: 3
Prediction: 3

Hi NVES_R,

Many thanks for your explanation!

From your description, it sounds like I don’t need to care about the node names in the ModelData section, right? Since we can pass the correct name directly to gs.create_plugin_node(), we don’t need to write each node name into ModelData, and we also don’t need to build the namespace_plugin_map by hand.

You helped me a lot.
Thank you!

Sincerely,
Chieh

ModelData is just a class to conveniently keep a bunch of that data (name, shape, etc.) together in a way that makes sense. It’s not necessary, just good practice. As for the ReLU6 names, they definitely don’t have to be part of the class.

Happy to help!