TensorRT INT8 model deployed in TF-Serving throws errors

TensorRT Version: 5.1.5.0
GPU Type: Tesla T4
Nvidia Driver Version: 418.87.01
CUDA Version: 10.1.243
CUDNN Version: 7.6.3
Python Version (if applicable): 3.7.4
TensorFlow Version (if applicable): 1.14.1

Operating System + Version: Ubuntu 16.04.6 LTS (GNU/Linux 4.4.0-142-generic x86_64)

external/org_tensorflow/tensorflow/compiler/tf2tensorrt/kernels/trt_engine_op.cc:632] Building a new TensorRT engine for TRTEngineOp_0 input shapes: [[1,168,512]]
external/org_tensorflow/tensorflow/compiler/tf2tensorrt/utils/trt_logger.cc:37] DefaultLogger Tensor DataType is determined at build time for tensors not marked as input or output.
external/org_tensorflow/tensorflow/compiler/tf2tensorrt/convert/convert_nodes.cc:1467] Quantization range was not found for Tacotron_model/inference/encoder_convolutions/conv_layer_1_encoder_convolutions/conv1d/conv1d. This is okay if TensorRT does not need the range (e.g. due to node fusion).
external/org_tensorflow/tensorflow/compiler/tf2tensorrt/convert/convert_nodes.cc:1467] Quantization range was not found for (Unnamed ITensor* 6). This is okay if TensorRT does not need the range (e.g. due to node fusion).
external/org_tensorflow/tensorflow/compiler/tf2tensorrt/convert/convert_nodes.cc:1467] Quantization range was not found for Tacotron_model/inference/encoder_convolutions/conv_layer_1_encoder_convolutions/conv1d/conv1d/ExpandDims. This


Hi,
Can you share your code/script and model so I can try to reproduce this or further debug?

Meanwhile, could you please test the model with the “trtexec” command?
“trtexec” is useful for benchmarking networks and makes the issue faster and easier to debug.
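For reference, a typical trtexec run looks roughly like the sketch below. The model file name and flag values are placeholders, not taken from your setup, and the exact set of supported options varies between TensorRT releases, so please check the output of `trtexec --help` for your 5.1 installation first:

```shell
# Hypothetical invocation -- substitute your actual model file, input/output
# tensor names, and shapes. --int8 builds the engine in INT8 precision;
# --batch sets the max batch size; --workspace is the builder workspace in MB.
trtexec --onnx=model.onnx \
        --int8 \
        --batch=1 \
        --workspace=1024
```

If the model is in UFF or Caffe format instead of ONNX, trtexec accepts those through its corresponding model-format options; the INT8 and batching flags work the same way.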

Thanks