TensorRT-optimized pb cannot be deployed by TF Serving

I am using TensorRT to accelerate the inference speed of a Tacotron 2 model, with TensorRT 5.0.2.6 and TensorFlow 1.13.0rc0.

I convert the SavedModel to a TF-TRT SavedModel using the TensorRT API below:

import os
import tensorflow.contrib.tensorrt as trt

trt.create_inference_graph(
    input_graph_def=None,   # not needed when converting from a SavedModel
    outputs=None,
    max_batch_size=32,
    input_saved_model_dir=os.path.join(args.export_dir, args.version),
    output_saved_model_dir=args.output_saved_model_dir,
    precision_mode=args.precision_mode)  # e.g. "FP32", "FP16", or "INT8"

The output tensorrt_savedmodel.pb cannot be imported into TensorBoard for viewing, but it can be deployed with TF Serving.
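As a workaround for viewing the graph, here is a minimal sketch (paths are hypothetical) that loads the converted SavedModel in a session and writes its graph to an event file that TensorBoard can read:

import tensorflow as tf

# Hypothetical paths for illustration.
saved_model_dir = "./tensorrt_savedmodel"
logdir = "./tb_logdir"

with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, ["serve"], saved_model_dir)
    # Write the graph so `tensorboard --logdir ./tb_logdir` can render it.
    tf.summary.FileWriter(logdir, sess.graph).close()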

However, when a client sends a request to TF Serving over gRPC (a minimal client sketch follows the error output below), there is an error:

<_Rendezvous of RPC that terminated with:
status = StatusCode.INVALID_ARGUMENT
details = "The TF function for the TRT segment could not be empty
 [[{{node model/inference/prenet/TRTEngineOp_33}}]]"
debug_error_string = "{"created":"@1572417319.714936208","description":"Error received from peer ipv4:192.168.23.17:8500","file":"src/core/lib/surface/call.cc","file_line":1052,"grpc_message":"The TF function for the TRT segment could not be empty\n\t [[{{node model/inference/prenet/TRTEngineOp_33}}]]","grpc_status":3}"
>

Can you help explain this?
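In case it helps with debugging, here is a sketch of how one might inspect the TRTEngineOp nodes in the converted graph; the attribute names below are from the TF 1.13-era op definition and may differ across versions:

import tensorflow as tf

saved_model_dir = "./tensorrt_savedmodel"  # hypothetical path

with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, ["serve"], saved_model_dir)
    for node in sess.graph.as_graph_def().node:
        if node.op == "TRTEngineOp":
            # segment_funcdef_name: fallback TF function for the segment;
            # serialized_segment: pre-built engine bytes (empty if built at runtime).
            print(node.name,
                  "fallback_func=%r" % node.attr["segment_funcdef_name"].s,
                  "has_prebuilt_engine=%s" % bool(node.attr["serialized_segment"].s))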

Could you please let us know if you are still facing this issue?

Thanks

Hi, I am facing a similar issue. Any updates on this?