LSTM Error with TensorRT on Jetson Orin

I have a Voice Encoder .onnx model that I would like to convert into a .trt engine on my NVIDIA Jetson Orin (an ARM device) running CUDA 11.4 and TensorRT 8.5.2.2.
When I use trtexec on the model, the build fails and I get these errors:

[08/01/2024-23:25:39] [W] [TRT] Convolution + generic activation fusion is disable due to incompatible driver or nvrtc

Error[2]: [standardBuilderUtils.cpp::canStride::126] Error Code 2: Internal Error (Assertion !layerImpls.empty() failed. /lstm/LSTM has no RunnerBuilders)

[08/01/2024-23:22:20] [E]
Error[2]: [builder.cpp::buildSerializedNetwork::751] Error Code 2: Internal Error (Assertion engine != nullptr failed. ).

Then I went to Google Colab (x86) and spun up a T4 instance with CUDA 12.2 and TensorRT 10.2. There, trtexec built successfully, and I was able to get a .trt engine and run inference on it. However, a .trt engine is specific to the TensorRT version and GPU it was built with, so that engine doesn't help me; I still need one built on the Orin itself.

I have a feeling the problem is the LSTM in the model architecture, since I was able to successfully convert other .onnx models to .trt engines on the Orin. Is TensorRT 8.5.2.2 too outdated to process this architecture?
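One workaround I've seen suggested for older TensorRT builds that choke on the fused ONNX LSTM op is to re-export the model with the recurrence manually unrolled, so the exported graph contains only primitive ops (MatMul, Add, Sigmoid, Tanh) instead of a single LSTM node. A minimal NumPy sketch of the cell math such an unrolled export would trace (the gate ordering, shapes, and names here are illustrative, not taken from my model):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell(x, h, c, W, U, b):
    # One LSTM step as plain matrix ops. Gates are stacked along
    # the output dim as [input, forget, cell, output], each size H.
    H = h.shape[-1]
    z = x @ W.T + h @ U.T + b            # (batch, 4H)
    i = sigmoid(z[:, 0 * H:1 * H])       # input gate
    f = sigmoid(z[:, 1 * H:2 * H])       # forget gate
    g = np.tanh(z[:, 2 * H:3 * H])       # candidate cell state
    o = sigmoid(z[:, 3 * H:4 * H])       # output gate
    c_next = f * c + i * g
    h_next = o * np.tanh(c_next)
    return h_next, c_next

def lstm_unrolled(xs, h0, c0, W, U, b):
    # xs: (time, batch, input); loop over time steps explicitly so a
    # tracer records one set of primitive ops per step instead of a
    # single fused LSTM node.
    h, c = h0, c0
    for x in xs:
        h, c = lstm_cell(x, h, c, W, U, b)
    return h
```

If this is the right direction, the PyTorch side would replace nn.LSTM with an equivalent per-step cell loop before calling torch.onnx.export, at the cost of a larger graph and possibly slower builds.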

I’ve attached a Netron visualization of my .onnx model, the PyTorch-to-ONNX conversion script, and the VoiceEncoder model architecture. Any help is greatly appreciated.


torch_to_onnx.pdf (39.6 KB)
VoiceEncoder.pdf (65.1 KB)

Hi @benjaminsoo777 ,
Apologies for the delay. The Jetson Orin forum should be the right place to get assistance on this issue.
Thanks