ONNX to TensorRT conversion

Hi Team - We are getting the errors shown in the screenshots below when trying to convert an ONNX model to TRT. Please guide us on how to proceed.

  1. We are using the Mask2Former transformer decoder in our panoptic segmentation model.
  2. We are able to convert all the individual layers to TRT, but when we run the model as a whole we get an error in the 2nd module.

MagnaMask2Former
  1. MagnaMask2FormerPixelLevelModule (Torch2ONNX: Success / ONNX2TRT: Success)
  2. Mask2FormerTransformerModule (Torch2ONNX: Success / ONNX2TRT: Failed)
    a. Mask2FormerSinePositionEmbedding (Torch2ONNX: ??? / ONNX2TRT: ???)
    b. Mask2FormerMaskedAttentionDecoder (Torch2ONNX: ??? / ONNX2TRT: ???)
      i. Mask2FormerMaskedAttentionDecoderLayer (Torch2ONNX: ??? / ONNX2TRT: ???)
        a. Mask2FormerAttention (Torch2ONNX: ??? / ONNX2TRT: ???)
        b. MultiheadAttention (Torch2ONNX: ??? / ONNX2TRT: ???)
      ii. Mask2FormerMaskPredictor (Torch2ONNX: ??? / ONNX2TRT: ???)
        a. Mask2FormerMLPPredictionHead (Torch2ONNX: ??? / ONNX2TRT: ???)
          i. Mask2FormerPredictionBlock (Torch2ONNX: ??? / ONNX2TRT: ???)

Hi,
Please share the ONNX model and the script, if not shared already, so that we can assist you better.
Meanwhile, you can try a few things:

  1. Validate your model with the snippet below:

check_model.py

import onnx

filename = "your_model.onnx"  # replace with the path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)

  2. Try running your model with the trtexec command.

In case you are still facing the issue, please share the trtexec --verbose log for further debugging.
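As a sketch, a trtexec invocation that captures a verbose log might look like the following (the flags mirror the PASSED command line quoted later in this thread; the model filename and workspace size are examples to adjust for your setup):

```shell
# Convert the ONNX model to a TensorRT engine and save the verbose log to share.
# Model filename and workspace size are placeholders; adjust to your environment.
trtexec --onnx=transformer_1x3x544x960.onnx --fp16 --verbose \
        --workspace=20000 2>&1 | tee trtexec_verbose.log
```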
Thanks!

Hi,

We are unable to run the script. Could you please share the ONNX model with us?

Thank you.

PFA error logs for the onnx_to_trt conversion.
Please look at the error logs and help us resolve them.

Below is the tree structure of the Mask2Former transformer model.

When we run the individual layers, the conversion succeeds, but when we run the entire transformer module we get an error.

Mask2FormerTransformerModule (Torch2ONNX: Success / ONNX2TRT: Failed)
  a. Mask2FormerSinePositionEmbedding (Torch2ONNX: Success / ONNX2TRT: Success)
  b. Mask2FormerMaskedAttentionDecoder (Torch2ONNX: Success / ONNX2TRT: Failed)
    1. Mask2FormerMaskedAttentionDecoderLayer (Torch2ONNX: Success / ONNX2TRT: Success)
      a. Mask2FormerAttention (Torch2ONNX: Success / ONNX2TRT: Success)
      b. MultiheadAttention (Torch2ONNX: Success / ONNX2TRT: Success)
    2. Mask2FormerMaskPredictor (Torch2ONNX: Success / ONNX2TRT: Success)
      a. Mask2FormerMLPPredictionHead (Torch2ONNX: Success / ONNX2TRT: Success)
      b. Mask2FormerPredictionBlock (Torch2ONNX: Success / ONNX2TRT: Success)

OnnxTotrt_logs_for_full_transformer_model (230.6 KB)

We have attached the verbose logs. Could you please guide us on what could be causing the failure?

Hi,

[07/10/2023-13:38:02] [I]
&&&& PASSED TensorRT.trtexec [TensorRT v8601] # trtexec --onnx=transformer_1x3x544x960.onnx --fp16 --verbose --workspace=20000

We are unable to reproduce the error on the latest TensorRT version, 8.6.1. Please try the latest TensorRT version.
You can also try the TensorRT container at https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tensorrt for easy setup.
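For reference, a minimal sketch of running the NGC TensorRT container (the image tag below is an assumption; check the NGC catalog page for a release that ships TensorRT 8.6.1):

```shell
# Hypothetical tag; pick the NGC release matching the TensorRT version you need.
# Mounts the current directory so the ONNX model is visible inside the container.
docker run --gpus all -it --rm \
    -v "$PWD":/workspace/models \
    nvcr.io/nvidia/tensorrt:23.06-py3
# trtexec is included in the container image.
```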

If you still face the same issue, please share with us the details below.

TensorRT Version :
GPU Type :
Nvidia Driver Version :
CUDA Version :
CUDNN Version :
Operating System + Version :
Baremetal or Container (if container which image + tag) :

Thank you.

Thank you so much for the update. We are upgrading our device to TensorRT 8.6.1 and CUDA 12.0 and will test on this. We will update you soon.

Hi, we are getting the same segmentation fault error while converting ONNX to TRT.

We have installed TensorRT 8.6.1.6 with CUDA 12.0 and JetPack 5.1.1 to run the conversion.

PFA jtop output and the libraries (pip list) we have on our board.

pip_list (4.8 KB)

Also, it would be great if you could share the details below so that we can replicate the same setup and try:

TensorRT Version :
GPU Type :
Nvidia Driver Version :
CUDA Version :
CUDNN Version :
Operating System + Version :
Baremetal or Container (if container which image + tag) :

Please guide us how to proceed further.

Hi,

We are moving this post to the Jetson-related forum to get better help.

Thank you.

Hi,

How do you install TensorRT 8.6 on Orin?
Although there is a CUDA 12 for Jetson, we don’t have a TensorRT 8.6 public release for Jetson yet.

Thanks.

Hi, I have installed TensorRT 8.6.1 through this link.

I did this after getting the suggestion below to try the ONNX to TensorRT conversion with the latest version of TensorRT.

I am still facing the same error. Please guide me on the correct versions needed for this conversion. It would be great if I could get the details below as used for your ONNX to TRT conversion.

TensorRT Version :
GPU Type :
Nvidia Driver Version :
CUDA Version :
ONNXRuntime:
CUDNN Version :
Operating System + Version :
Baremetal or Container (if container which image + tag) :

Hi,

The release page only has the x86_64 and ARM SBSA packages.
No Jetson package is available.

Thanks.

Hi - I am getting the error below when I try running inference with the TRT engine file.
Please guide us on how to proceed with this error.

Error Code 1: Cuda Driver (an illegal memory access was encountered)

illegal_memory_issue_trt (22.3 KB)
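One way to narrow down an illegal memory access is to rerun the failing inference under CUDA's compute-sanitizer; a minimal sketch, assuming the engine is loaded via trtexec (the engine filename here is a placeholder):

```shell
# Rerun inference under the memory checker to pinpoint the faulting access.
# Engine filename is hypothetical; substitute your generated .trt file.
compute-sanitizer --tool memcheck \
    trtexec --loadEngine=transformer.trt --verbose
```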
