Has anyone used TF-TRT to speed up an RNN model (LSTM and others)?
My problem is that the input shapes of NMT and LSTM models are not constant, so when I try to convert
a native pb to a TensorRT-supported pb, I hit an error like:
TensorRT node TRTEngineOp_0 added for segment 0 consisting of 10 nodes failed: Invalid argument: Validation failed for TensorRTInputPH_0 and input slot 0: Input tensor with shape [?,?,161] has an unknown non-batch dimension at dim 1. Fallback to TF…
Has anyone else run into this?
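For reference, the check that trips here can be sketched in plain Python. This is a hypothetical mimic of TF-TRT's input validation, not its actual code: TF-TRT tolerates an unknown batch dimension (dim 0), but any other unknown dimension (like the variable sequence length in `[?,?,161]`) fails validation and triggers the fallback to TF.

```python
def has_unknown_non_batch_dim(shape):
    """Mimic of TF-TRT's input validation (illustrative only).

    -1 stands for an unknown dimension, as in TensorFlow shapes.
    The batch dimension (dim 0) may be unknown, but any other
    unknown dimension is rejected for a static TensorRT engine.
    """
    return any(d == -1 for d in shape[1:])

# [?, ?, 161]: batch unknown (fine) plus an unknown time dimension (rejected)
print(has_unknown_non_batch_dim([-1, -1, 161]))  # True -> falls back to TF
# [?, 50, 161]: only the batch dimension is unknown, so it would pass
print(has_unknown_non_batch_dim([-1, 50, 161]))  # False
```

If that is the right reading, one workaround would be padding/bucketing sequences to fixed lengths before conversion, so only the batch dimension stays unknown.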