Description
I previously converted a Caffe OpenPose model to TensorFlow via MMdnn (GitHub - microsoft/MMdnn: a set of tools to help users inter-operate among different deep learning frameworks, e.g. model conversion and visualization between Caffe, Keras, MXNet, TensorFlow, CNTK, PyTorch, ONNX and CoreML).
I can successfully run inference with the saved model with correct output in tensorflow.
I then converted the TensorFlow model with trt.TrtGraphConverterV2 for FP16 with a minimum segment size of 3.
There were no errors during conversion or when loading the resulting model.
When I try to run inference with the same input image as the non-trt version, I receive an error:
“can’t fuse pad and convolution with caffe pad mode”.
If I change the minimum segment size to 1, I get a similar error:
“W tensorflow/compiler/tf2tensorrt/utils/trt_logger.cc:38] DefaultLogger Can’t fuse pad and convolution with same pad mode”.
The input tensor I'm using is 1,368,368,3. The model itself has the input layer set up as dynamic ?,16,16.
Does TensorRT need the input/output sizes to be fixed?
I'm new to TensorFlow and TensorRT (I took the TensorRT course from NVIDIA).
My understanding is that padding=same means a layer pads its input so that its output has the same spatial size as its input.
From my searches it looks like this padding mode might not be supported. Does that mean I have to hard-code padding values
for each layer? If so, what's the easiest way to go about doing that?
Thanks for any help possible!
Environment
TensorRT Version: 7.0.0.11
GPU Type: Tesla V100
Nvidia Driver Version: 440.64.00
CUDA Version: 10.2.89
CUDNN Version: (I can't find it in the Docker container; I'm using the TensorRT that is compiled into TensorFlow)
Operating System + Version: Ubuntu 18.04.4 LTS
Python Version (if applicable): 3.6.9
TensorFlow Version (if applicable): 2.1.0
Baremetal or Container (if container which image + tag): Docker (nvcr.io/nvidia/tensorflow:20.02-tf2-py3)