Relu6 not supported in TensorRT for MobileNetV2

Hi,

I tried to run MobileNet V2 through TensorRT from a UFF file, and it failed with the error below: Relu6 not supported. According to the documentation, TensorRT does not support Relu6 at the moment, but the sampleUffSSD example suggests that Relu6 can be replaced by Relu(x) - Relu(x - 6).

adit@gibson2:~/Downloads/mobilenet_v2_1.4_224$ ~/Downloads/TensorRT-4.0.1.6/bin/trtexec --uff=mobilenet_v2_1.4_224_frozen.pb.uff --output=MobilenetV2/Predictions/Reshape_1 --uffInput=input,3,224,224
uff: mobilenet_v2_1.4_224_frozen.pb.uff
output: MobilenetV2/Predictions/Reshape_1
uffInput: input,3,224,224
UFFParser: Validator error: MobilenetV2/expanded_conv_1/expand/Relu6: Unsupported operation _Relu6
Engine could not be created
Engine could not be created
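
As a quick sanity check on the Relu(x) - Relu(x - 6) decomposition mentioned above, here is a small NumPy sketch (just verifying the math, not taken from the sample):

# relu6(x) = min(max(x, 0), 6) = relu(x) - relu(x - 6)
import numpy as np

x = np.linspace(-10.0, 10.0, 81)
relu = lambda v: np.maximum(v, 0.0)

relu6_reference = np.minimum(relu(x), 6.0)    # what tf.nn.relu6 computes
relu6_decomposed = relu(x) - relu(x - 6.0)    # the two-Relu replacement

assert np.allclose(relu6_reference, relu6_decomposed)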

I created a config.py similar to the one in the sampleUffSSD example, which is supposed to pre-process the Relu6 operations, but unfortunately it doesn't work. (An alternative Relu6-specific config is sketched after the steps below.)

  1. Created config.py

import graphsurgeon as gs
import tensorflow as tf

Input = gs.create_node("input",
                       op="Placeholder",
                       dtype=tf.float32,
                       shape=[1, 3, 224, 224])

namespace_plugin_map = {
    "Preprocessor": Input,
    "ToFloat": Input,
    "input": Input,
}

def preprocess(dynamic_graph):
    # Now create a new graph by collapsing namespaces
    dynamic_graph.collapse_namespaces(namespace_plugin_map)
    # Remove the outputs, so we just have a single output node (NMS).
    #dynamic_graph.remove(dynamic_graph.graph_outputs, remove_exclusive_dependencies=False)

  2. Generated the UFF file

convert-to-uff tensorflow --input-file mobilenet_v2_1.4_224_frozen.pb -O MobilenetV2/Predictions/Reshape_1 -p config.py

  3. Ran inference

adit@gibson2:~/Downloads/mobilenet_v2_1.4_224$ ~/Downloads/TensorRT-4.0.1.6/bin/trtexec --uff=mobilenet_v2_1.4_224_frozen.pb.uff --output=MobilenetV2/Predictions/Reshape_1 --uffInput=input,3,224,224
uff: mobilenet_v2_1.4_224_frozen.pb.uff
output: MobilenetV2/Predictions/Reshape_1
uffInput: input,3,224,224
UFFParser: Validator error: MobilenetV2/expanded_conv_1/expand/Relu6: Unsupported operation _Relu6
Engine could not be created
Engine could not be created
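
For comparison, a rough alternative config.py that rewrites the Relu6 nodes themselves instead of collapsing namespaces is sketched below. It simply downgrades every Relu6 to a plain Relu, so the clamp at 6 is lost and accuracy would need to be re-checked; it also assumes graphsurgeon exposes find_nodes_by_op, as in the version bundled with TensorRT 4:

# config.py (sketch): approximate Relu6 with a plain Relu so the UFF parser accepts it.
# NOTE: this drops the clamp at 6, so it is only an approximation.
import graphsurgeon as gs

def preprocess(dynamic_graph):
    # Rewrite the op type of every Relu6 node in place; the graph structure is
    # unchanged, so convert-to-uff only sees ops the UFF parser understands.
    for node in dynamic_graph.find_nodes_by_op("Relu6"):
        node.op = "Relu"

The same convert-to-uff command with -p config.py would pick this up through the preprocess() hook.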

Can anyone let me know how the UFFParser does the pre-processing for Relu6?

Thanks in advance

Any progress on this issue?
Does TensorRT support Relu6 now?

Hello,

It seems Relu6 is not supported in TensorRT, but the sampleUffSSD example somehow pre-processes Relu6 into Relu(x) - Relu(x - 6) through a config file.
I still don't understand how to do that; I'm still searching for a solution.
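
If the decomposition has to happen on the TensorFlow side, one option (not necessarily what sampleUffSSD does internally) is to rebuild the network with a helper like the one below in place of tf.nn.relu6 before freezing the graph, so that only Relu and Sub ops remain:

import tensorflow as tf  # TF 1.x, matching the frozen-graph workflow above

def relu6_as_two_relus(x):
    # Mathematically identical to tf.nn.relu6(x) = min(max(x, 0), 6).
    return tf.nn.relu(x) - tf.nn.relu(x - 6.0)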

I am also confused by the conversion of Relu6. In the documentation ("4.1.2. Example 2: Adding A Custom Layer That Is Not Supported In UFF Using C++"), it is suggested that Relu6 should be replaced by a custom layer declared in config.py. However, in the sampleUffSSD example, it seems that Relu6 is handled directly by convert-to-uff.
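
Regarding the custom-layer route from that documentation section, a config.py along the following lines could map each Relu6 node onto a plugin op. The plugin name "Relu6_TRT" here is hypothetical, and a matching plugin implementation would still have to be written and registered on the TensorRT C++ side (via the parser's plugin factory), otherwise engine creation fails at those nodes:

# config.py (sketch): replace every Relu6 node with a custom plugin node.
# "Relu6_TRT" is a placeholder op name; a matching TensorRT plugin must be provided.
import graphsurgeon as gs

def preprocess(dynamic_graph):
    relu6_nodes = dynamic_graph.find_nodes_by_op("Relu6")
    plugin_map = {
        node.name: gs.create_plugin_node(name=node.name, op="Relu6_TRT")
        for node in relu6_nodes
    }
    # Each node name is treated as its own namespace, so only that node is replaced.
    dynamic_graph.collapse_namespaces(plugin_map)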