Getting an error while converting a custom Faster R-CNN ResNet-50 model to a TensorRT engine using TensorRT 5.0

I am using TensorFlow Slim to detect objects with the Faster R-CNN ResNet-50 architecture.
TensorFlow version: 1.8.0

I have downloaded and installed TensorRT 5.0 to quantize the model and create a TRT engine for inference.
While converting a custom model (.pb) created with Faster R-CNN ResNet-50 to UFF, I get the error below:
"uff.model.exceptions.UffException: Transpose permutation has op Sub, expected Const. Only constant permuations are supported in UFF"

I get this error even though all variables were converted to constants when creating the .pb file.

I am using the convert_to_uff.py script that ships with TensorRT.

Other system information:
CUDA version: 9.0
cuDNN version: 5.1.10
Ubuntu Linux 16.04

Hello,

To help us debug, can you please share the .pb file that demonstrates the "expected Const" error during parsing?

thanks
NVIDIA Enterprise Support

I have the same problem.
Other system information:
CUDA version: 9.0.176
cuDNN version: 7.0.4
Ubuntu Linux 16.04
TensorFlow: 1.7

ai@ai:/graph$ convert-to-uff frozen_inference_graph.pb
Loading frozen_inference_graph.pb
UFF Version 0.5.5
=== Automatically deduced input nodes ===
[name: "image_tensor"
op: "Placeholder"
attr {
  key: "dtype"
  value {
    type: DT_UINT8
  }
}
attr {
  key: "shape"
  value {
    shape {
      dim {
        size: -1
      }
      dim {
        size: -1
      }
      dim {
        size: -1
      }
      dim {
        size: 3
      }
    }
  }
}
]

=== Automatically deduced output nodes ===
[name: "SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/map/TensorArrayStack/TensorArrayGatherV3"
op: "TensorArrayGatherV3"
input: "SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/map/TensorArray_5"
input: "SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/map/TensorArrayStack/range"
input: "SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/map/while/Exit_1"
attr {
  key: "_class"
  value {
    list {
      s: "loc:@SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/map/TensorArray_5"
    }
  }
}
attr {
  key: "dtype"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "element_shape"
  value {
    shape {
      dim {
        size: 300
      }
      dim {
        size: 4
      }
    }
  }
}
, name: "SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/map/TensorArrayStack_1/TensorArrayGatherV3"
op: "TensorArrayGatherV3"
input: "SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/map/TensorArray_6"
input: "SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/map/TensorArrayStack_1/range"
input: "SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/map/while/Exit_2"
attr {
  key: "_class"
  value {
    list {
      s: "loc:@SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/map/TensorArray_6"
    }
  }
}
attr {
  key: "dtype"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "element_shape"
  value {
    shape {
      dim {
        size: 300
      }
    }
  }
}
, name: "SecondStagePostprocessor/ToFloat_1"
op: "Cast"
input: "SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/map/TensorArrayStack_4/TensorArrayGatherV3"
attr {
  key: "DstT"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "SrcT"
  value {
    type: DT_INT32
  }
}
, name: "add"
op: "Add"
input: "SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/map/TensorArrayStack_2/TensorArrayGatherV3"
input: "add/y"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
]

Using output node SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/map/TensorArrayStack/TensorArrayGatherV3
Using output node SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/map/TensorArrayStack_1/TensorArrayGatherV3
Using output node SecondStagePostprocessor/ToFloat_1
Using output node add
Converting to UFF graph
Warning: No conversion function registered for layer: TensorArrayGatherV3 yet.
Converting SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/map/TensorArrayStack_2/TensorArrayGatherV3 as custom op: TensorArrayGatherV3
Warning: No conversion function registered for layer: Exit yet.
Converting SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/map/while/Exit_3 as custom op: Exit
Warning: No conversion function registered for layer: Switch yet.
Converting SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/map/while/Switch_3 as custom op: Switch
Warning: No conversion function registered for layer: LoopCond yet.
Converting SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/map/while/LoopCond as custom op: LoopCond
Warning: No conversion function registered for layer: Less yet.
Converting SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/map/while/Less as custom op: Less
Warning: No conversion function registered for layer: Enter yet.
Converting SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/map/while/Less/Enter as custom op: Enter
DEBUG: convert reshape to flatten node
Warning: keepdims is ignored by the UFF Parser and defaults to True
Warning: No conversion function registered for layer: CropAndResize yet.
Converting CropAndResize as custom op: CropAndResize
Warning: No conversion function registered for layer: ExpandDims yet.
Converting ExpandDims_6 as custom op: ExpandDims
Warning: No conversion function registered for layer: Range yet.
Converting range as custom op: Range
Warning: No conversion function registered for layer: TensorArrayGatherV3 yet.
Converting map/TensorArrayStack/TensorArrayGatherV3 as custom op: TensorArrayGatherV3
Warning: No conversion function registered for layer: Exit yet.
Converting map/while/Exit_1 as custom op: Exit
Warning: No conversion function registered for layer: Switch yet.
Converting map/while/Switch_1 as custom op: Switch
Warning: No conversion function registered for layer: LoopCond yet.
Converting map/while/LoopCond as custom op: LoopCond
Warning: No conversion function registered for layer: Less yet.
Converting map/while/Less as custom op: Less
Warning: No conversion function registered for layer: Enter yet.
Converting map/while/Less/Enter as custom op: Enter
Warning: No conversion function registered for layer: TensorArrayGatherV3 yet.
Converting BatchMultiClassNonMaxSuppression/map/TensorArrayStack/TensorArrayGatherV3 as custom op: TensorArrayGatherV3
Warning: No conversion function registered for layer: Exit yet.
Converting BatchMultiClassNonMaxSuppression/map/while/Exit_1 as custom op: Exit
Warning: No conversion function registered for layer: Switch yet.
Converting BatchMultiClassNonMaxSuppression/map/while/Switch_1 as custom op: Switch
Warning: No conversion function registered for layer: LoopCond yet.
Converting BatchMultiClassNonMaxSuppression/map/while/LoopCond as custom op: LoopCond
Warning: No conversion function registered for layer: Less yet.
Converting BatchMultiClassNonMaxSuppression/map/while/Less as custom op: Less
Warning: No conversion function registered for layer: Enter yet.
Converting BatchMultiClassNonMaxSuppression/map/while/Less/Enter as custom op: Enter
Warning: No conversion function registered for layer: ExpandDims yet.
Converting ExpandDims_4 as custom op: ExpandDims
Warning: No conversion function registered for layer: ExpandDims yet.
Converting ExpandDims_1 as custom op: ExpandDims
Warning: No conversion function registered for layer: TensorArrayGatherV3 yet.
Converting Preprocessor/map/TensorArrayStack/TensorArrayGatherV3 as custom op: TensorArrayGatherV3
Warning: No conversion function registered for layer: Exit yet.
Converting Preprocessor/map/while/Exit_1 as custom op: Exit
Warning: No conversion function registered for layer: Switch yet.
Converting Preprocessor/map/while/Switch_1 as custom op: Switch
Warning: No conversion function registered for layer: LoopCond yet.
Converting Preprocessor/map/while/LoopCond as custom op: LoopCond
Warning: No conversion function registered for layer: Less yet.
Converting Preprocessor/map/while/Less as custom op: Less
Warning: No conversion function registered for layer: Enter yet.
Converting Preprocessor/map/while/Less/Enter as custom op: Enter
Warning: No conversion function registered for layer: Cast yet.
Converting ToFloat_3 as custom op: Cast
Warning: No conversion function registered for layer: Merge yet.
Converting Preprocessor/map/while/Merge as custom op: Merge
Warning: No conversion function registered for layer: NextIteration yet.
Converting Preprocessor/map/while/NextIteration as custom op: NextIteration
Warning: No conversion function registered for layer: Switch yet.
Converting Preprocessor/map/while/Switch as custom op: Switch
Warning: No conversion function registered for layer: Enter yet.
Converting Preprocessor/map/while/Enter as custom op: Enter
Warning: No conversion function registered for layer: Merge yet.
Converting Preprocessor/map/while/Merge_1 as custom op: Merge
Warning: No conversion function registered for layer: NextIteration yet.
Converting Preprocessor/map/while/NextIteration_1 as custom op: NextIteration
Warning: No conversion function registered for layer: TensorArrayWriteV3 yet.
Converting Preprocessor/map/while/TensorArrayWrite/TensorArrayWriteV3 as custom op: TensorArrayWriteV3
Warning: No conversion function registered for layer: ResizeBilinear yet.
Converting Preprocessor/map/while/ResizeToRange/resize_images/ResizeBilinear as custom op: ResizeBilinear
Warning: No conversion function registered for layer: TensorArrayReadV3 yet.
Converting Preprocessor/map/while/TensorArrayReadV3 as custom op: TensorArrayReadV3
Warning: No conversion function registered for layer: Enter yet.
Converting Preprocessor/map/while/TensorArrayReadV3/Enter_1 as custom op: Enter
Warning: No conversion function registered for layer: TensorArrayScatterV3 yet.
Converting Preprocessor/map/TensorArrayUnstack/TensorArrayScatter/TensorArrayScatterV3 as custom op: TensorArrayScatterV3
Warning: No conversion function registered for layer: TensorArrayV3 yet.
Converting Preprocessor/map/TensorArray as custom op: TensorArrayV3
Warning: No conversion function registered for layer: Range yet.
Converting Preprocessor/map/TensorArrayUnstack/range as custom op: Range
Warning: No conversion function registered for layer: Enter yet.
Converting Preprocessor/map/while/TensorArrayReadV3/Enter as custom op: Enter
Warning: No conversion function registered for layer: Unpack yet.
Converting Preprocessor/map/while/ResizeToRange/unstack as custom op: Unpack
Warning: No conversion function registered for layer: Merge yet.
Converting Preprocessor/map/while/ResizeToRange/cond/Merge as custom op: Merge
Warning: No conversion function registered for layer: Switch yet.
Converting Preprocessor/map/while/ResizeToRange/cond/Switch_1 as custom op: Switch
Warning: No conversion function registered for layer: Greater yet.
Converting Preprocessor/map/while/ResizeToRange/Greater as custom op: Greater
Warning: No conversion function registered for layer: Cast yet.
Converting Preprocessor/map/while/ResizeToRange/ToFloat_2 as custom op: Cast
Warning: keepdims is ignored by the UFF Parser and defaults to True
Warning: No conversion function registered for layer: Cast yet.
Converting Preprocessor/map/while/ResizeToRange/ToInt32_1 as custom op: Cast
Warning: No conversion function registered for layer: Round yet.
Converting Preprocessor/map/while/ResizeToRange/Round_1 as custom op: Round
Warning: No conversion function registered for layer: Cast yet.
Converting Preprocessor/map/while/ResizeToRange/ToFloat_1 as custom op: Cast
Warning: No conversion function registered for layer: Cast yet.
Converting Preprocessor/map/while/ResizeToRange/ToFloat as custom op: Cast
Warning: No conversion function registered for layer: Cast yet.
Converting Preprocessor/map/while/ResizeToRange/ToInt32 as custom op: Cast
Warning: No conversion function registered for layer: Round yet.
Converting Preprocessor/map/while/ResizeToRange/Round as custom op: Round
Warning: No conversion function registered for layer: Cast yet.
Converting Preprocessor/map/while/ResizeToRange/ToInt32_3 as custom op: Cast
Warning: No conversion function registered for layer: Round yet.
Converting Preprocessor/map/while/ResizeToRange/Round_3 as custom op: Round
Warning: No conversion function registered for layer: Cast yet.
Converting Preprocessor/map/while/ResizeToRange/ToInt32_2 as custom op: Cast
Warning: No conversion function registered for layer: Round yet.
Converting Preprocessor/map/while/ResizeToRange/Round_2 as custom op: Round
Warning: No conversion function registered for layer: Switch yet.
Converting Preprocessor/map/while/ResizeToRange/cond/Switch_2 as custom op: Switch
Warning: No conversion function registered for layer: ExpandDims yet.
Converting Preprocessor/map/while/ResizeToRange/resize_images/ExpandDims as custom op: ExpandDims
Warning: No conversion function registered for layer: Enter yet.
Converting Preprocessor/map/while/TensorArrayWrite/TensorArrayWriteV3/Enter as custom op: Enter
Warning: No conversion function registered for layer: TensorArrayV3 yet.
Converting Preprocessor/map/TensorArray_1 as custom op: TensorArrayV3
Warning: No conversion function registered for layer: Enter yet.
Converting Preprocessor/map/while/Enter_1 as custom op: Enter
Warning: No conversion function registered for layer: Range yet.
Converting Preprocessor/map/TensorArrayStack/range as custom op: Range
Warning: No conversion function registered for layer: TensorArraySizeV3 yet.
Converting Preprocessor/map/TensorArrayStack/TensorArraySizeV3 as custom op: TensorArraySizeV3
Traceback (most recent call last):
  File "/home/ai/anaconda3/bin/convert-to-uff", line 11, in
    sys.exit(main())
  File "/home/ai/anaconda3/lib/python3.5/site-packages/uff/bin/convert_to_uff.py", line 89, in main
    debug_mode=args.debug
  File "/home/ai/anaconda3/lib/python3.5/site-packages/uff/converters/tensorflow/conversion_helpers.py", line 187, in from_tensorflow_frozen_model
    return from_tensorflow(graphdef, output_nodes, preprocessor, **kwargs)
  File "/home/ai/anaconda3/lib/python3.5/site-packages/uff/converters/tensorflow/conversion_helpers.py", line 157, in from_tensorflow
    debug_mode=debug_mode)
  File "/home/ai/anaconda3/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 94, in convert_tf2uff_graph
    uff_graph, input_replacements, debug_mode=debug_mode)
  File "/home/ai/anaconda3/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 79, in convert_tf2uff_node
    op, name, tf_node, inputs, uff_graph, tf_nodes=tf_nodes, debug_mode=debug_mode)
  File "/home/ai/anaconda3/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 47, in convert_layer
    return cls.registry_[op](name, tf_node, inputs, uff_graph, **kwargs)
  File "/home/ai/anaconda3/lib/python3.5/site-packages/uff/converters/tensorflow/converter_functions.py", line 184, in convert_transpose
    raise UffException("Transpose permutation has op " + str(tf_permutation_node.op) + ", expected Const. Only constant permuations are supported in UFF.")
uff.model.exceptions.UffException: Transpose permutation has op Sub, expected Const. Only constant permuations are supported in UFF.

@clh2007, can you share your .pb file to help us debug?

Here is a link to the .pb file (graph structure):
https://pan.baidu.com/s/1xYjwSCY8wupvbLmVW0hiqA

@clh2007,

Thank you for sharing the .pb file. We are not able to install the Baidu client to download it. Can you please use Google Drive or Dropbox?

regards,
NVES.

The .pb file has been uploaded:
https://drive.google.com/file/d/1FAou0LhcKBwz8uDEG6-_5v9Q4opYHOXd/view?usp=sharing

I had a similar error and was able to solve it.

It looks like calling tf.nn.softmax with an axis argument other than -1 introduces non-constant permutation ops, and doing the transpose manually avoids that.

For me this meant changing

    result = tf.nn.softmax(tensor, axis=1)

to

    result = tf.transpose(tensor, (0, 2, 3, 1))
    result = tf.nn.softmax(result, axis=-1)
    result = tf.transpose(result, (0, 3, 1, 2))
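For what it's worth, the transpose sandwich above is numerically equivalent to applying softmax over axis 1 directly; a quick NumPy sanity check (illustrative only, not part of the original workaround):

```python
import numpy as np

def softmax(x, axis):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
x = rng.random((2, 4, 3, 3)).astype(np.float32)  # NCHW-style tensor

direct = softmax(x, axis=1)
# Move channels last, softmax over the last axis, move channels back.
sandwich = softmax(x.transpose(0, 2, 3, 1), axis=-1).transpose(0, 3, 1, 2)

print(np.allclose(direct, sandwich))  # True
```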

Thank you! But I checked all of my scripts: every call to tf.nn.softmax uses the default axis; none of them sets tf.nn.softmax(tensor, axis=1).

@clh2007, it looks like you are using a preprocessing script when converting - can you share that with us as well?

As for the error you are seeing, per engineering, converting variables to constants only affects ops of type Variable or VariableV2. A construct like this:

    a = tf.constant(...)
    b = tf.constant(...)
    perm = tf.subtract(a, b)
    out = tf.transpose(x, perm=perm)

will not work, since the permutation input to the transpose is not known at build time. Using the constfold (constant folding) optimizer would solve the issue, assuming that the permutation can be computed before inference time.
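To make the build-time folding idea concrete, here is a small NumPy analogue (hypothetical values, not taken from the model in question): when the permutation is derived purely from constants, a folding pass can replace the Sub node with a literal permutation before the UFF parser ever sees it.

```python
import numpy as np

# Analogue of two tf.constant(...) nodes.
a = np.array([1, 3, 4, 2])
b = np.array([1, 1, 1, 1])

# Analogue of tf.subtract(a, b): it depends only on constants, so a
# constant-folding pass can collapse it to the literal [0, 2, 3, 1]
# at graph-build time.
perm = a - b

x = np.zeros((2, 3, 4, 5), dtype=np.float32)
y = np.transpose(x, axes=perm)  # NCHW -> NHWC

print(perm.tolist(), y.shape)  # [0, 2, 3, 1] (2, 4, 5, 3)
```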


I am also having the same issue with the Faster R-CNN conversion from a .pb file to UFF.

I also have the same problem while converting a BiLSTM-based .pb file to UFF.
It shows:
raise UffException("Transpose permutation has op " + str(tf_permutation_node.op) + ", expected Const. Only constant permuations are supported in UFF.")
uff.model.exceptions.UffException: Transpose permutation has op ConcatV2, expected Const. Only constant permuations are supported in UFF.

Is there any problem with my use of tf.transpose()?
My code is as follows:

    outputs, _ = tf.nn.bidirectional_dynamic_rnn(lstm_fw_cell, lstm_bw_cell, self.embedded_chars, dtype=tf.float32)
    output_rnn = tf.concat(outputs, axis=2)  # [batch_size, sequence_length, hidden_size*2]
    with tf.name_scope("attention"):
        M = tf.layers.dense(output_rnn, hidden_size * 2)
        W_a = tf.Variable(tf.random_normal([hidden_size * 2], stddev=0.1))
        alpha = tf.nn.softmax(tf.reshape(tf.matmul(tf.reshape(M, [-1, hidden_size * 2]), tf.reshape(W_a, [-1, 1])), [-1, sequence_length]))
        r = tf.matmul(tf.transpose(output_rnn, [0, 2, 1]), tf.reshape(alpha, [-1, sequence_length, 1]))
        r = tf.squeeze(r, [2])
        self.h_star = tf.tanh(r)

Will this problem also be solved by the constfold optimizer?
And could you please give more details on the constfold optimizer?

Thanks.

@Bakhn, excuse me, have you solved this problem?

Hi,

I get the same error message as others when trying to convert multires.pb from this file: https://github.com/DIUx-xView/baseline/releases/download/v1.1/models_release_v1-1.zip

Would the constfold optimizer solve the issue in my case? If so, could you point me to documentation on it (I could not find anything), and otherwise could you tell me whether this is a limitation of TensorRT or whether there is another way to fix this issue?

Any update? Did you successfully convert Faster R-CNN ResNet-50 to UFF?

Thanks,

I also get the same error message with a Faster R-CNN model when I convert a .pb file to UFF.
Did you solve this problem?

I also had the same error message when converting a .pb file to UFF. I solved it using TensorFlow's constant folding optimizer. The code is as follows; you can try this method.

import contextlib
import tensorflow as tf

@contextlib.contextmanager
def options(options):
    # Temporarily apply the given Grappler optimizer options, then
    # restore the previous options on exit.
    old_opts = tf.config.optimizer.get_experimental_options()
    tf.config.optimizer.set_experimental_options(options)
    try:
        yield
    finally:
        tf.config.optimizer.set_experimental_options(old_opts)

……
with options({'constant_folding': True}):
    ……                # your code
    tf.transpose(……)  # your code
    ……                # your code

Can you give me the full code?

My error:

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:521: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:522: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:523: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:528: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
Using TensorFlow backend.
Converting…
NOTE: UFF has been tested with TensorFlow 1.14.0.
WARNING: The version of TensorFlow installed on this system is not guaranteed to work with UFF.
UFF Version 0.6.5
=== Automatically deduced input nodes ===
[name: "image_tensor"
op: "Placeholder"
attr {
  key: "dtype"
  value {
    type: DT_UINT8
  }
}
attr {
  key: "shape"
  value {
    shape {
      dim {
        size: -1
      }
      dim {
        size: -1
      }
      dim {
        size: -1
      }
      dim {
        size: 3
      }
    }
  }
}
]

Using output node [out1,out2,out3]
Converting to UFF graph
Traceback (most recent call last):
  File "convert.py", line 14, in
    trt_graph = uff.from_tensorflow_frozen_model(filename, output_nodes=[output_node])
  File "/usr/local/lib/python3.6/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 229, in from_tensorflow_frozen_model
    return from_tensorflow(graphdef, output_nodes, preprocessor, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 178, in from_tensorflow
    debug_mode=debug_mode)
  File "/usr/local/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py", line 94, in convert_tf2uff_graph
    uff_graph, input_replacements, debug_mode=debug_mode)
  File "/usr/local/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py", line 62, in convert_tf2uff_node
    raise UffException(str(name) + " was not found in the graph. Please use the -l option to list nodes in the graph.")
uff.model.exceptions.UffException: [out1,out2,out3] was not found in the graph. Please use the -l option to list nodes in the graph.
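For reference, the traceback above suggests the whole bracketed string "[out1,out2,out3]" was passed to the converter as a single node name. uff.from_tensorflow_frozen_model expects output_nodes to be a list of individual node-name strings, so a sketch of the likely fix looks like this (the out1/out2/out3 names are taken from the pasted log; the actual uff call is left commented out since it needs the real frozen graph):

```python
# The failing call effectively did this: one string containing brackets
# and commas, which no node in the graph is named after.
bad_output_nodes = ["[out1,out2,out3]"]

# Each output node should instead be its own string in the list.
good_output_nodes = ["out1", "out2", "out3"]

# import uff
# trt_graph = uff.from_tensorflow_frozen_model(
#     "frozen_inference_graph.pb", output_nodes=good_output_nodes)

print(good_output_nodes)  # ['out1', 'out2', 'out3']
```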