PyTorch LSTM: pth -> ONNX -> TensorRT gives all-zero outputs

Hello, I have recently been using a PyTorch LSTM. Converting the LSTM .pth model to ONNX succeeded, and running it with ONNX Runtime gives correct results, but when I run inference with TensorRT the output is all zeros. Could you help me solve this problem? Thanks.

(1, 1, 3)
(1, 1, 3)
(1, 1, 3)
(1, 1, 3)
(5, 1, 3)

[[[-0.23013705 -0.03982337 -0.11622404]]]
[[[-0.6445751 -0.09410419 -0.27716887]]]
[[[-0.09160091 -0.02479896 -0.04808782]]]
[[[0. 0. 0.]]]

[[[0. 0. 0.]]
 [[0. 0. 0.]]
 [[0. 0. 0.]]
 [[0. 0. 0.]]
 [[0. 0. 0.]]]
The TensorRT outputs are all zeros.
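When debugging this kind of discrepancy, it helps to compare the two runtimes' outputs programmatically. `compare_outputs` below is a hypothetical helper (not part of the original script) that distinguishes an all-zero output, which usually indicates an engine or binding problem, from a mere numerical mismatch:

```python
import numpy as np

def compare_outputs(reference, candidate, atol=1e-4):
    """Compare a trusted output (e.g. from ONNX Runtime) against a suspect
    one (e.g. from TensorRT) and report the most likely situation."""
    reference = np.asarray(reference, dtype=np.float32)
    candidate = np.asarray(candidate, dtype=np.float32)
    if candidate.shape != reference.shape:
        return "shape mismatch"
    if not np.any(candidate):
        # An all-zero output usually points at an engine/binding problem,
        # not a numerical-accuracy issue.
        return "all zeros"
    if np.allclose(reference, candidate, atol=atol):
        return "match"
    return "mismatch"

print(compare_outputs([[[-0.23, -0.04, -0.12]]], [[[0.0, 0.0, 0.0]]]))  # -> all zeros
```

Comparing every output binding this way (not just the first) can also reveal whether only some bindings are wired incorrectly.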

Hi,
Could you share the ONNX model and the script, if not shared already, so that we can assist you better?
In the meantime, you can try a few things:

  1. Validate your model with the snippet below.

check_model.py

import onnx
filename = "yourONNXmodel"  # replace with the path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)
  2. Try running your model with the trtexec command.

In case you are still facing the issue, please share the trtexec --verbose log for further debugging.
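For reference, a typical invocation looks like the following (the model path `lstm.onnx` is an assumption; substitute your own file):

```shell
# Build a TensorRT engine from the ONNX model and capture a verbose log
# for debugging. Requires trtexec from the TensorRT installation.
trtexec --onnx=lstm.onnx --verbose > trtexec_verbose.log 2>&1
```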
Thanks!

@775661382,

We are unable to build the engine successfully. It looks like you're using a custom plugin; please share the complete resources needed to reproduce the issue. In the meantime, we recommend that you provide the following environment details and try the latest TensorRT version 8.0 EA, in case you're using an older one.

Note: The TRT engine needs to be built on the machine on which we want to run inference. This is because TensorRT optimizes the graph for the available GPUs, so the engine is platform-specific and not portable across different platforms.

Thank you.

@775661382,

As recommended in the previous reply, have you had a chance to try the latest TensorRT version 8.0? We tried running the script you shared, but infer_lstm.py looks incomplete: the code that calls inference is missing, as is the sample image you're using for inference.

Please try the latest TensorRT version, and if you still face the issue, please share the complete inference script along with a sample image for better debugging.

Thank you.