Unary Layer in TensorRT 3.0 RC

I am using TensorRT 3.0 RC via the Python API. I am trying to create an inference engine for a simple network created in Keras, using the Lite Engine and following the steps in sections 2.3.2.2.2 and 2.7.1.1 of the TensorRT 3 User Guide.

This works as long as I don’t use a Unary Layer, like a simple Abs

x = keras.layers.Lambda(lambda t: K.abs(t))(x)

in which case I get an error from the UFF Parser:

[TensorRT] ERROR: UFFParser: Parser error: lambda_1/Abs: Unary not supported for other non-constant node
[TensorRT] ERROR: Failed to parse UFF model stream

Any suggestions what the problem might be?

Hi,

Not every use case is supported by TensorRT.
We are checking this one and will post an update later.

Thanks.

In general, using a Lambda layer is not a great idea, because it requires stopping the GPU, moving the data from the GPU to the CPU (or, in the case of unified buffers, mapping it for CPU use), and then running Python on it. Python is inherently slow. Once the Python code is done, the process has to be reversed: the output is uploaded back to the GPU and inference continues.
This leads to quite poor performance in most use cases. It’s better to stick to the layer types supported by TensorRT (or whatever other native framework you’re using).
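If you need an absolute value without a Unary node, one workaround I can suggest (my own idea, not an official NVIDIA recommendation) is the identity abs(x) = relu(x) + relu(-1 * x): ReLU, elementwise Mul by a constant, and Add are all on the supported-operations list, so no Unary op is emitted. A minimal pure-Python sketch of the identity, where `relu` and `abs_via_relu` are hypothetical helper names:

```python
def relu(x):
    # Standard rectifier: max(x, 0).
    return max(x, 0.0)

def abs_via_relu(x):
    # abs(x) == relu(x) + relu(-1.0 * x).
    # In a Keras/TensorFlow graph this would use only a ReLU
    # activation, an elementwise Mul by -1, and an Add -- no Unary op.
    return relu(x) + relu(-1.0 * x)

print(abs_via_relu(-3.5))  # 3.5
print(abs_via_relu(2.0))   # 2.0
```

In Keras this would translate to two `Activation('relu')` branches joined by an `Add` layer, with the negation done by a constant multiply; whether that is worth the extra nodes depends on your model.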

Hi,

This error may be caused by the Lambda wrapper.
Could you check whether it works correctly without the Lambda wrapper?

Thanks.

[TensorRT] ERROR: UFFParser: Parser error: Generator/conv1_1/lrelu/Abs: Unary not supported for other non-constant node
[TensorRT] ERROR: Failed to parse UFF model stream

Hi,

Although TensorRT has a Unary layer, the UFF parser doesn’t support it.

This feature will be enabled in our next release.
Please wait for our update and announcement.

Thanks.

Lol… why even release TensorRT 3 in such an unfinished state?

Hi,

Sorry for the inconvenience.

TensorRT 3 is targeted at TensorFlow model support.
The extra plugin API feature will be enabled in a future release.

Please pay attention to our announcement.
Thanks.

@AastaLLL

I also see that Reduce is unsupported:

[ERROR] UFFParser: Parser error: scale1/moments/mean: Reduce operator not supported
ASSERT(parser->parse(uffFile.c_str(), network, nvinfer::DataType::kFLOAT)) failed at ros/src/tensorrt/EngineBuilder.cpp:119
Backtrace: 
main in ??:0
__libc_start_main in /build/eglibc-SvCtMH/eglibc-2.19/csu/libc-start.c:321
_start in ??:0

Does this mean all of these operations are unsupported?

@tf2uff.register("Sum")
def convert_sum(name, tf_node, inputs, uff_graph, **kwargs):
    return _reduce_helper(name, tf_node, inputs, uff_graph, func="sum", **kwargs)


@tf2uff.register("Prod")
def convert_prod(name, tf_node, inputs, uff_graph, **kwargs):
    return _reduce_helper(name, tf_node, inputs, uff_graph, func="prod", **kwargs)


@tf2uff.register("Min")
def convert_min(name, tf_node, inputs, uff_graph, **kwargs):
    return _reduce_helper(name, tf_node, inputs, uff_graph, func="min", **kwargs)


@tf2uff.register("Max")
def convert_max(name, tf_node, inputs, uff_graph, **kwargs):
    return _reduce_helper(name, tf_node, inputs, uff_graph, func="max", **kwargs)


@tf2uff.register("Mean")
def convert_mean(name, tf_node, inputs, uff_graph, **kwargs):
    return _reduce_helper(name, tf_node, inputs, uff_graph, func="mean", **kwargs)


@tf2uff.register("Squeeze")
def convert_squeeze(name, tf_node, inputs, uff_graph, **kwargs):
    axis = tf2uff.get_tf_int_list(tf_node.attr['squeeze_dims'])
    uff_graph.squeeze(inputs[0], name=name, axis=axis)
    return [tf2uff.split_node_name_and_output(inp)[0] for inp in inputs]

Will they be enabled in the next release?

Hi,

The Reduce layer will be supported in a future release, but we can’t disclose schedule information.
Squeeze is supported in the TensorRT GA release, but it is currently only available for desktop users.
Add, Sub, Mul, Div, Minimum and Maximum are supported by the UFF Binary layer.

Check our document for details:
>> 2.3.2.2.4. Supported TensorFlow Operations

Thanks.

The problem is that the documentation isn’t accurate and that different parts of the TensorRT toolchain are inconsistent.

Why does the UFF conversion not emit an “unsupported operation” warning for the reduce operators when the parser then fails on them? The docs actually claim to support them: “Mean is converted into a UFF Reduce layer.”

Unary operations are also claimed by the docs to be supported, but simply aren’t:

TensorRT docs 2.3.2.2.4 “Supported TensorFlow operations”
“Negative, Abs, Sqrt, Rsqrt, Pow, Exp and Log are converted into a UFF Unary layer.”

Nvidia Rep:
“Although TensorRT has a unary layer, UFF parser doesn’t support it.”

Hi,

Currently, the UFF parser only supports the Unary operator under constant folding.
Unary on non-constant output tensors will be supported in a future release.

Your comment has been passed on to our internal team and will help us make the documentation clearer.

Thanks.

I am using TensorRT 3.0.4 via the Python API. I am trying to create an inference engine for a simple network created in TensorFlow.

I get an error from the UFF Parser:

[TensorRT] ERROR: UFFParser: Parser error: conv1/Abs: Unary not supported for other non-constant node
[TensorRT] ERROR: Failed to parse UFF model stream

Does TensorRT 3.0.4 still not support Abs?

Hi,

The Unary op is supported by TensorRT, but not by the UFF parser in TensorRT 3.0 (it is only available to Caffe and API users).

We have already enabled Unary support in UFF with TensorRT 4.0:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#tfops
Please watch for the announcement of the next Jetson release.

Thanks.

I tried to create an engine with a Unary op in TensorRT 4.0,
but the error still occurs.

We have already enabled Unary support in UFF with TensorRT 4.0

Is this correct?

My simple testing code is here.

import tensorflow as tf
import uff
import tensorrt as trt
from tensorrt.parsers import uffparser

# Build a minimal graph containing a single Unary op (Sqrt).
sess = tf.Session()
inputs_ = tf.placeholder(tf.float32, [1, 1, 1], name="inputs_")
sqrt = tf.sqrt(inputs_, name="sqrt")

# Convert the TensorFlow graph to UFF.
uff_model = uff.from_tensorflow(sess.graph_def, ["sqrt"])

# Parse the UFF model and try to build the engine.
G_LOGGER = trt.infer.ConsoleLogger(trt.infer.LogSeverity.ERROR)
parser = uffparser.create_uff_parser()
parser.register_input("inputs_", (1, 1, 1), 0)
parser.register_output("sqrt")

# Fails here with "Unary not supported for other non-constant node".
engine = trt.utils.uff_to_trt_engine(G_LOGGER, uff_model, parser, 1, 1 << 25)

And the error message is the same:

[TensorRT] ERROR: UFFParser: Parser error: sqrt: Unary not supported for other non-constant node
[TensorRT] ERROR: Failed to parse UFF model stream

When tf.sqrt is replaced with a non-Unary op (for example, tf.square), it works fine.
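If sqrt itself is the blocker, a possible interim workaround (my own suggestion, not something from NVIDIA) is to approximate it with a fixed number of Newton iterations, since each step uses only Add, Mul and Div, which the docs list as mapping to the UFF Binary layer. A pure-Python sketch, where `sqrt_via_newton` and the iteration count are my own choices:

```python
def sqrt_via_newton(x, iters=8):
    # Newton's method for y = sqrt(x): y <- 0.5 * (y + x / y).
    # Each step uses only Add, Mul and Div, so unrolling a fixed
    # number of steps in a TF graph emits no Unary node.
    y = max(x, 1.0)  # crude initial guess; Maximum is also a supported binary op
    for _ in range(iters):
        y = 0.5 * (y + x / y)
    return y

print(sqrt_via_newton(9.0))  # ~3.0
```

In a real graph you would unroll the loop into a fixed chain of ops; whether eight iterations gives acceptable accuracy depends on your input range, so this is only a sketch.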

Hi,

We can reproduce this issue in our environment.

Let us check this issue with our internal team.
We will update you with more information later.

Thanks a lot for the feedback.

Is there any update on this? I am running into the same error with a freshly installed TensorRT.

Hi,

Our UFF parser currently only supports sqrt on constant nodes.

This issue has been reported to our internal team and is now prioritized.
We will try to accommodate it in our next release.

Thanks.

Hi,

I am using TensorRT 4.0.1.6. While parsing a Keras model with the UFF parser, I got the following error:

[TensorRT] ERROR: UFFParser: Parser error: main_output/Exp: Unary not supported for other non-constant node
[TensorRT] ERROR: Failed to parse UFF model stream

Hi,

This issue is fixed, but the fix is not available for TX2 yet.
Please wait for the next JetPack release.

Thanks.