Error while converting TF graph to TensorRT (UFFParser: Parser error: Maximum: Unsupported binary op max with constant right)


Getting UFF parsing errors while converting from TensorFlow to TensorRT. I tried conversion using the scripts provided by:


TensorRT Version:
GPU Type: RTX 2080
Nvidia Driver Version: 440.33.01
CUDA Version: 10.2
Baremetal or Container (if container which image + tag):

I have a custom TensorFlow model which I want to convert to a TensorRT plan for obvious reasons, but I am facing issues. I cloned the NVIDIA-AI-IOT/tf_to_trt_image_classification repo in the above-given containers and used the scripts/, but it throws the following errors:

UFFParser: Parsing Maximum[Op: Binary]. Inputs: Maximum/x, bn0/add_1
UffParser: Parser error: Maximum: Unsupported binary op max with constant right
Failed to parse UFF

I tried using the TensorRT Python API for conversion, following the sections Importing From TensorFlow Using Python, Building An Engine In Python, and Serializing A Model In Python.
The UFF conversion happens properly, but it throws Segmentation fault (core dumped) when parsing the UFF. Here is the code snippet:
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

model_file = "/home/neil/uff/test_.uff"

with trt.Builder(TRT_LOGGER) as builder, builder.create_network() as network, trt.UffParser() as parser:
    parser.register_input("data", (1, 112, 112))
    # parse must run inside the with block, while parser and network are still alive;
    # calling it after they are released is a likely cause of the segfault
    parser.parse(model_file, network)
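For reference, the full parse, build, and serialize flow described in those sections can be sketched roughly like this. This is a minimal sketch against the UFF-era (pre-8.x) TensorRT Python API; the plan path, workspace size, and especially the output node name are assumptions and must be adapted to the actual graph:

```python
# Sketch: parse a UFF file, build an engine, and serialize it to disk.
# Paths and the output node name below are placeholders, not from the thread.
UFF_PATH = "/home/neil/uff/test_.uff"
PLAN_PATH = "/home/neil/uff/test_.plan"  # assumed output location

def build_and_serialize(uff_path, plan_path):
    import tensorrt as trt  # imported lazily so the sketch loads without TensorRT installed
    logger = trt.Logger(trt.Logger.WARNING)
    with trt.Builder(logger) as builder, builder.create_network() as network, \
            trt.UffParser() as parser:
        parser.register_input("data", (1, 112, 112))
        parser.register_output("output_node")  # hypothetical name; must match your graph
        if not parser.parse(uff_path, network):
            raise RuntimeError("UFF parse failed")
        builder.max_workspace_size = 1 << 30  # 1 GiB, an assumed budget
        with builder.build_cuda_engine(network) as engine:
            with open(plan_path, "wb") as f:
                f.write(engine.serialize())
```

Registering the output node is easy to miss; without it the parser has no graph output to anchor the network, which can also lead to crashes at parse or build time.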

I tried different versions of the TF container for conversion to check whether there is a compatibility issue in newer versions, since the op Maximum is supported by TensorRT. Specifically, I tried 3.4. TensorFlow Container 18.11-19.01 (TensorFlow 1.12), as the Maximum op is listed in the supported ops.


Hi, the UFF and Caffe parsers have been deprecated from TensorRT 7 onwards, hence we request you to try the ONNX parser.

Please check the below link for the same.


Will try that and check if it works for my model.


For converting from TensorFlow to ONNX I used tf2onnx:

(Note: Do remember to use --inputs-as-nchw; otherwise, by default, the model input is in HWC format, which causes an issue when using the model in DeepStream.)
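The invocation looks roughly like this (a sketch; the graph path, tensor names, and opset are placeholders, not taken from the thread):

```shell
# Convert a frozen TensorFlow graph to ONNX, forcing NCHW input layout.
# frozen_graph.pb, data:0, output_node:0, and the opset are placeholders.
python -m tf2onnx.convert \
    --input frozen_graph.pb \
    --inputs data:0 \
    --outputs output_node:0 \
    --inputs-as-nchw data:0 \
    --opset 11 \
    --output model.onnx
```

Note that --inputs-as-nchw takes the input tensor name(s), so it must match whatever is passed to --inputs.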

For ONNX to TensorRT I tried using the above-given link (onnx2trt), but it threw an error:

[2020-12-28 18:00:51 ERROR] Network has dynamic or shape inputs, but no optimization profile has been defined.

A better way to do it is using trtexec, which provides a good number of tunable parameters for running inference and performance benchmarking, and can also save the TensorRT engine.
Here is the link:
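A trtexec invocation that also addresses the dynamic-shape error above could look roughly like this (a sketch; paths, the input name "data", and the shape ranges are assumptions):

```shell
# Build a TensorRT engine from an ONNX model with trtexec, providing
# min/opt/max shapes so dynamic-shape inputs get an optimization profile.
# model.onnx, "data", and the batch range are placeholders.
trtexec --onnx=model.onnx \
        --explicitBatch \
        --minShapes=data:1x3x112x112 \
        --optShapes=data:8x3x112x112 \
        --maxShapes=data:32x3x112x112 \
        --saveEngine=model.trt
```

The --minShapes/--optShapes/--maxShapes triple is what defines the optimization profile; without it, a model with dynamic input dimensions fails with exactly the "no optimization profile has been defined" error shown above.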


Hi @duttaneil16,

You have to use an optimization profile.

A sample is here.
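In the Python API, attaching an optimization profile looks roughly like this (a minimal sketch for TensorRT 7; the input name "data" and the shape ranges are assumptions carried over from the thread):

```python
# Sketch: parse an ONNX model and attach an optimization profile so a
# network with dynamic input shapes can be built into an engine.
# The input name and min/opt/max shapes below are assumptions.
MIN_SHAPE = (1, 3, 112, 112)
OPT_SHAPE = (8, 3, 112, 112)
MAX_SHAPE = (32, 3, 112, 112)

def build_engine(onnx_path):
    import tensorrt as trt  # imported lazily so the sketch loads without TensorRT installed
    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    network = builder.create_network(flags)
    parser = trt.OnnxParser(network, logger)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError("ONNX parse failed")
    config = builder.create_builder_config()
    profile = builder.create_optimization_profile()
    # min, opt, max shapes for the dynamic input; must cover the shapes used at runtime
    profile.set_shape("data", MIN_SHAPE, OPT_SHAPE, MAX_SHAPE)
    config.add_optimization_profile(profile)
    return builder.build_engine(network, config)
```

The key lines are create_optimization_profile, set_shape, and add_optimization_profile; omitting them for a dynamic-shape network produces the error quoted earlier in the thread.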