TF-TRT TrtGraphConverterV2 converter build failure

Description

While trying to build a TRT engine file using the TF-TRT TrtGraphConverterV2 converter, I'm getting the following error:

2022-04-25 10:53:36.106693: I tensorflow/compiler/tf2tensorrt/kernels/trt_engine_op.cc:436] TRTEngineOp not using explicit QDQ
2022-04-25 10:53:36.107070: W tensorflow/compiler/tf2tensorrt/kernels/trt_engine_op.cc:833] TF-TRT Warning: Running native segment forTRTEngineOp_0_0 due to failure in verifying input shapes: Input shapes do not match input partial shapes stored in graph, for TRTEngineOp_0_0: [[28,28,1]] != [[?,28,28,1]]
2022-04-25 10:53:36.111145: W tensorflow/core/framework/op_kernel.cc:1745] OP_REQUIRES failed at transpose_op.cc:142 : INVALID_ARGUMENT: transpose expects a vector of size 3. But input(1) is a vector of size 4
2022-04-25 10:53:36.111176: W tensorflow/core/framework/op_kernel.cc:1745] OP_REQUIRES failed at trt_engine_op.cc:618 : INVALID_ARGUMENT: {{function_node TRTEngineOp_0_0_native_segment}} transpose expects a vector of size 3. But input(1) is a vector of size 4
[[{{node StatefulPartitionedCall/modelSvg/conv2d_1/Conv2D-0-TransposeNHWCToNCHW-LayoutOptimizer}}]]

Environment

TensorRT Version: 7.2.3.4
GPU Type: GeForce RTX 3090
Nvidia Driver Version: 470.10.01
CUDA Version: 11.4
CUDNN Version: 8.2.4
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.9.12
TensorFlow Version (if applicable): (gpu) 2.8.0
PyTorch Version (if applicable): NA
Baremetal or Container (if container which image + tag): Baremetal

Relevant Files


TF2TRT.zip (1.9 MB)

TF2TRT.zip includes:

  • TF_2_TRT_Example.py python script file to reproduce the problem

  • modelSvg\saved_model directory

  • TF-TRT-error.txt - Reported error

Please advise why I'm getting an input size error saying the vector has size 4 when, based on my understanding, I declared size 3. Refer to this line in the code:

inp1 = np.random.normal(size=(28, 28, 1)).astype(np.float32)
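For reference, here is how the 3-D array I pass compares with the 4-D partial shape [?,28,28,1] reported in the warning (adding a batch dimension below is only an illustration of the difference; I'm not sure this is the intended fix):

import numpy as np

inp1 = np.random.normal(size=(28, 28, 1)).astype(np.float32)  # 3-D array, as in the script
inp1_batched = inp1[np.newaxis, ...]                          # 4-D array, shape (1, 28, 28, 1)
print(inp1.shape, inp1_batched.shape)                         # (28, 28, 1) (1, 28, 28, 1)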

Thanks,

Hi,
We recommend that you check the samples linked below in case of TF-TRT integration issues.

If the issue persists, we recommend you reach out to the TensorFlow forum.
Thanks!

Hello,
Thanks for your quick response.

My script is based on this sample:
worflow-with-savedmodel
which is one of the samples you recommended above.

I only changed the loaded model, the number of inputs, and the input sizes; the rest is the same.
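For reference, a minimal sketch of the converter workflow my script follows (the paths and the (1, 28, 28, 1) input shape here are placeholders, not necessarily exactly what the attached script uses):

import numpy as np
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Convert the SavedModel graph with TF-TRT.
converter = trt.TrtGraphConverterV2(input_saved_model_dir='modelSvg/saved_model')
converter.convert()

# input_fn yields one tuple of input arrays per engine-build iteration.
def input_fn():
    yield (np.random.normal(size=(1, 28, 28, 1)).astype(np.float32),)

# Build the TRT engines ahead of time and save the converted model.
converter.build(input_fn=input_fn)
converter.save('modelSvg/trt_saved_model')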

Is TF-TRT support provided by the TensorFlow development team?
Isn't it provided by the NVIDIA TensorRT development team?

Thanks,

Hi,

We provide initial support, but the above issue looks more related to the TensorFlow model. Please reach out to the TensorFlow forum to get better help.

Thank you.