Failed to speed up model with TensorRT

Platform: Linux
Problem and terminal output:

layout failed: Invalid argument: The graph is already optimized by layout optimizer.
Engine creation for TRTEngineOp_21 failed. The native segment will be used instead. Reason: Invalid argument: Node Tacotron_model/inference/encoder_LSTM/bidirectional_rnn/bw/bw/while/encoder_bw_LSTM/BiasAdd should have an input named 'Tacotron_model/inference/encoder_LSTM/bidirectional_rnn/bw/bw/while/encoder_bw_LSTM/MatMul' but it is not available
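For context on what this message means: TF-TRT groups supported ops into TRTEngineOp nodes, and when engine creation for a segment fails (here, the bidirectional LSTM's while loop, which TF-TRT cannot build an engine for), that segment falls back to native TensorFlow. The model still runs, but the failed subgraph gets no TensorRT speedup. A small diagnostic sketch (TF 1.x API assumed; 'my_saved_model_trt' is the output directory from the conversion step below) can show how many segments were actually converted:

```python
import tensorflow as tf

# Load the converted saved_model and count TRTEngineOp nodes.
# Few or zero TRTEngineOp nodes means most of the graph stayed native.
with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, ['serve'], 'my_saved_model_trt')
    graph_def = sess.graph.as_graph_def()
    trt_ops = [n.name for n in graph_def.node if n.op == 'TRTEngineOp']
    print('TRTEngineOp nodes:', len(trt_ops))
```

If TRTEngineOp_21 is one of many engines, the rest of the model may still benefit even though this one segment falls back.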

I used Docker to run TF-TRT (TensorRT) conversion on my Tacotron-2 saved_model:

docker run --rm --gpus all -it \
  -v /tmp:/tmp \
  /usr/local/bin/saved_model_cli convert \
  --dir 'my_saved_model' \
  --output_dir 'my_saved_model_trt' \
  --tag_set serve \
  tensorrt --precision_mode FP16 --max_batch_size 1 --is_dynamic_op True
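The same conversion can also be driven from Python with the TF 1.x TrtGraphConverter API, which exposes a few knobs the CLI call above does not set. This is a minimal sketch, assuming TF >= 1.14 built with TensorRT support; the paths are taken from the command above, and the minimum_segment_size value is an illustrative choice, not from the post:

```python
from tensorflow.python.compiler.tensorrt import trt_convert as trt

converter = trt.TrtGraphConverter(
    input_saved_model_dir='my_saved_model',
    input_saved_model_tags=['serve'],
    precision_mode='FP16',
    max_batch_size=1,
    is_dynamic_op=True,
    # Raising this from the default (3) keeps small subgraphs, which are
    # the ones most likely to fail engine creation, in native TensorFlow.
    minimum_segment_size=10,
)
converter.convert()
converter.save('my_saved_model_trt')
```

With is_dynamic_op=True, engines are built lazily at inference time for the actual input shapes, which is why the engine-creation failure above only surfaces when the model runs.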