Description
Getting UFF parsing errors while converting from TensorFlow to TensorRT. I tried the conversion using the scripts provided by the NVIDIA-AI-IOT/tf_to_trt_image_classification repo (details below).
Environment
TensorRT Version:
GPU Type: RTX 2080
Nvidia Driver Version: 440.33.01
CUDA Version: 10.2
Baremetal or Container (if container which image + tag): nvcr.io/nvidia/tensorflow:20.03-tf1-py3, nvcr.io/nvidia/tensorflow:19.01-py3
Hi,
I have a custom TensorFlow model that I want to convert to a TensorRT plan, but I am running into issues. I cloned the NVIDIA-AI-IOT/tf_to_trt_image_classification repo in the containers listed above and ran scripts/convert_plan.py, but it throws the following errors:
UFFParser: Parsing Maximum[Op: Binary]. Inputs: Maximum/x, bn0/add_1
UffParser: Parser error: Maximum: Unsupported binary op max with constant right
Failed to parse UFF
I also tried the TensorRT Python API for the conversion, following the developer guide sections Importing From TensorFlow Using Python, Building An Engine In Python, and Serializing A Model In Python.
The UFF conversion itself completes, but parsing the UFF file crashes with Segmentation fault (core dumped). Here is the code snippet:
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
model_file = "/home/neil/uff/test_.uff"

with trt.Builder(TRT_LOGGER) as builder, builder.create_network() as network, trt.UffParser() as parser:
    parser.register_input("data", (1, 112, 112))
    parser.register_output("fc1/add_1")
    parser.parse(model_file, network)  # segfaults here
I tried different versions of the TF container to rule out a compatibility issue with newer versions. According to the Accelerating Inference in TensorFlow with TensorRT User Guide - NVIDIA Docs, the Maximum op is supported by TensorRT. I specifically tried 3.4. TensorFlow Container 18.11-19.01 (TensorFlow 1.12), since the Maximum op is listed among the supported ops there.
Thanks.