Error while converting TF graph to TensorRT (UFFParser: Parser error: Maximum: Unsupported binary op max with constant right)

Description

Getting UFF parsing errors while converting from TensorFlow to TensorRT. I tried conversion using the scripts provided by the NVIDIA-AI-IOT/tf_to_trt_image_classification repo (details below).

Environment

TensorRT Version:
GPU Type: RTX 2080
Nvidia Driver Version: 440.33.01
CUDA Version: 10.2
Baremetal or Container (if container which image + tag): nvcr.io/nvidia/tensorflow:20.03-tf1-py3, nvcr.io/nvidia/tensorflow:19.01-py3

Hi,
I have a custom tensorflow model which I want to convert to tensorrt plan for obvious reasons but am facing issues. I cloned the NVIDIA-AI-IOT/tf_to_trt_image_classification repo in the above-given containers and used the scripts/convert_plan.py but it throws the following errors-

UFFParser: Parsing Maximum[Op: Binary]. Inputs: Maximum/x, bn0/add_1
UffParser: Parser error: Maximum: Unsupported binary op max with constant right
Failed to parse UFF

I also tried the TensorRT Python API for conversion, following the sections Importing From TensorFlow Using Python, Building An Engine In Python, and then Serializing A Model In Python.
The UFF conversion itself works, but parsing the UFF throws Segmentation fault (core dumped). Here is the code snippet:
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

model_file = "/home/neil/uff/test_.uff"

with trt.Builder(TRT_LOGGER) as builder, builder.create_network() as network, trt.UffParser() as parser:
    # Register the graph's input (CHW shape) and output tensors before parsing
    parser.register_input("data", (1, 112, 112))
    parser.register_output("fc1/add_1")
    # parse() must run inside the with-block, while parser and network are still alive
    parser.parse(model_file, network)
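
For reference, once parse() succeeds, building and serializing the engine follows the Building An Engine In Python and Serializing A Model In Python sections; here is a minimal sketch with the pre-TensorRT-8 Python API (the workspace size and output file name are placeholders):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

with trt.Builder(TRT_LOGGER) as builder, builder.create_network() as network, trt.UffParser() as parser:
    parser.register_input("data", (1, 112, 112))
    parser.register_output("fc1/add_1")
    parser.parse("/home/neil/uff/test_.uff", network)
    builder.max_batch_size = 1            # implicit-batch UFF workflow
    builder.max_workspace_size = 1 << 28  # 256 MiB of builder scratch space (placeholder)
    engine = builder.build_cuda_engine(network)
    with open("test_.engine", "wb") as f:
        f.write(engine.serialize())       # serialized plan, reloadable via trt.Runtime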

I tried different versions of the TensorFlow container for conversion to check whether there is a compatibility issue in the newer versions. According to the Accelerating Inference in TensorFlow with TensorRT User Guide (NVIDIA Docs), the Maximum op is supported by TensorRT. I specifically tried section 3.4, TensorFlow Container 18.11-19.01 (TensorFlow 1.12), since the Maximum op is listed among the supported ops there.

Thanks.

Hi, the UFF and Caffe parsers have been deprecated from TensorRT 7 onwards, hence we request you to try the ONNX parser.

Please check the below link for the same.
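
As a starting point, parsing an ONNX model with the TensorRT 7 Python API looks roughly like the sketch below (the file name is a placeholder; note that the ONNX parser requires an explicit-batch network):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
# The ONNX parser requires an explicit-batch network definition
EXPLICIT_BATCH = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

with trt.Builder(TRT_LOGGER) as builder, \
        builder.create_network(EXPLICIT_BATCH) as network, \
        trt.OnnxParser(network, TRT_LOGGER) as parser:
    with open("model.onnx", "rb") as f:
        if not parser.parse(f.read()):
            # Report parser errors instead of failing silently
            for i in range(parser.num_errors):
                print(parser.get_error(i))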

Thanks!

Hi @NVES,
Will try that and check if it works for my model.

Thanks.

Hi,
For converting from TensorFlow to ONNX I used tf2onnx:

(Note: Remember to use --inputs-as-nchw, otherwise the model input defaults to NHWC format, which causes an issue when using the model in DeepStream.)
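
For example, a conversion command could look like this (the tensor names data:0 and fc1/add_1:0 are taken from the snippet earlier in the thread and the opset is a placeholder; adjust both for your graph):

python -m tf2onnx.convert \
    --graphdef frozen_graph.pb \
    --inputs data:0 \
    --outputs fc1/add_1:0 \
    --inputs-as-nchw data:0 \
    --opset 11 \
    --output model.onnx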

For ONNX to TensorRT I tried using the above-given link (onnx2trt), but it threw an error:

[2020-12-28 18:00:51 ERROR] Network has dynamic or shape inputs, but no optimization profile has been defined.

A better way to do it is using trtexec, which provides a good number of tunable parameters for running inference and performance benchmarking, and can also save the TensorRT engine.
Here is the link:
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec#tool-command-line-arguments
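
For instance, a command along these lines builds and saves an engine, and its shape flags double as the optimization profile the error above complains about (the data:Nx1x112x112 shapes match the input used earlier; the batch sizes are placeholders):

trtexec --onnx=model.onnx \
        --explicitBatch \
        --minShapes=data:1x1x112x112 \
        --optShapes=data:1x1x112x112 \
        --maxShapes=data:8x1x112x112 \
        --saveEngine=model.engine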

Thanks.

Hi @duttaneil16,

You have to use an optimization profile.

A sample is here:
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/sampleDynamicReshape
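
In the Python API this amounts to attaching an optimization profile to the builder config; here is a minimal sketch, assuming the dynamic input is named data with shape Nx1x112x112 (all shapes are placeholders):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
EXPLICIT_BATCH = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

with trt.Builder(TRT_LOGGER) as builder, \
        builder.create_network(EXPLICIT_BATCH) as network, \
        trt.OnnxParser(network, TRT_LOGGER) as parser, \
        builder.create_builder_config() as config:
    with open("model.onnx", "rb") as f:
        parser.parse(f.read())
    # One profile covering min/opt/max shapes for the dynamic input
    profile = builder.create_optimization_profile()
    profile.set_shape("data", (1, 1, 112, 112), (1, 1, 112, 112), (8, 1, 112, 112))
    config.add_optimization_profile(profile)
    engine = builder.build_engine(network, config)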
Thanks!