Building TRT engine with ONNX QAT model throws segmentation fault

Description

Hello

I’m trying to build an ONNX model with quantization-aware training (QAT), following this:

Everything was successful up to exporting the ONNX model from PyTorch, and the exported model passed onnx.checker.check_model() as well.
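For reference, the export flow looks roughly like this (a minimal sketch assuming NVIDIA's pytorch-quantization toolkit; the placeholder model, input shape, and opset are illustrative, not the actual network):

import torch
import torch.nn as nn
from pytorch_quantization import nn as quant_nn
from pytorch_quantization import quant_modules

# Patch torch.nn layers with quantized counterparts *before* building
# the model, so conv/linear layers carry fake-quant (QAT) nodes.
quant_modules.initialize()

# Placeholder: stands in for the real QAT-trained, calibrated network.
# A fresh, uncalibrated model like this cannot be exported as-is.
model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU())
model.eval()

# Export fake-quant nodes as ONNX QuantizeLinear/DequantizeLinear pairs.
quant_nn.TensorQuantizer.use_fb_fake_quant = True

dummy_input = torch.randn(1, 3, 360, 640)  # placeholder input shape
torch.onnx.export(model, dummy_input, "model.onnx", opset_version=13)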

But building the TensorRT engine fails with a segmentation fault:

trtexec --onnx=model.onnx --workspace=3000 --int8 --verbose
[08/10/2021-18:38:51] [V] [TRT] Parsing node: DequantizeLinear_276 [DequantizeLinear]
[08/10/2021-18:38:51] [V] [TRT] Searching for input: 318
[08/10/2021-18:38:51] [V] [TRT] Searching for input: 319
[08/10/2021-18:38:51] [V] [TRT] Searching for input: 320
[08/10/2021-18:38:51] [V] [TRT] DequantizeLinear_276 [DequantizeLinear] inputs: [318 -> (192, 128, 3, 3)[FLOAT]], [319 -> ()[FLOAT]], [320 -> ()[INT8]], 
[08/10/2021-18:38:51] [V] [TRT] Registering tensor: 321 for ONNX tensor: 321
[08/10/2021-18:38:51] [V] [TRT] DequantizeLinear_276 [DequantizeLinear] outputs: [321 -> (192, 128, 3, 3)[FLOAT]], 
[08/10/2021-18:38:51] [V] [TRT] Parsing node: ConvTranspose_277 [ConvTranspose]
[08/10/2021-18:38:51] [V] [TRT] Searching for input: 315
[08/10/2021-18:38:51] [V] [TRT] Searching for input: 321
[08/10/2021-18:38:51] [V] [TRT] Searching for input: body.11.bias
[08/10/2021-18:38:51] [V] [TRT] ConvTranspose_277 [ConvTranspose] inputs: [315 -> (1, 192, 180, 320)[FLOAT]], [321 -> (192, 128, 3, 3)[FLOAT]], [body.11.bias -> (128)[FLOAT]], 
[08/10/2021-18:38:51] [V] [TRT] Convolution input dimensions: (1, 192, 180, 320)
Segmentation fault (core dumped)

Here is my ONNX file:
https://drive.google.com/file/d/1gaUvxaFLcb2cinXcM6CW3pNbN9KOeI4W/view?usp=sharing

Thanks.

Environment

TensorRT Version: 8.0.1.6
GPU Type: NVIDIA GeForce RTX 2080 Ti
Nvidia Driver Version: 465.19.01
CUDA Version: 11.3
CUDNN Version: 8.2
Operating System + Version: Ubuntu 20.04
Python Version (if applicable): 3.8
TensorFlow Version (if applicable): -
PyTorch Version (if applicable): 1.8.1
Baremetal or Container (if container which image + tag): baremetal

Hi @k961118,

We could reproduce the same error. Are you able to successfully run inference using ONNX Runtime?

Hi @spolisetty

No, running inference with ONNX Runtime also fails while creating the inference session:

import onnx
import onnxruntime

# Checking the model succeeds...
model = onnx.load("model.onnx")
onnx.checker.check_model(model)

# ...but creating the inference session segfaults.
ort_session = onnxruntime.InferenceSession("model.onnx")
Segmentation fault (core dumped)

I found this issue and I think it is related.


Hi,
Please share the ONNX model and the script, if not shared already, so that we can assist you better.
Meanwhile, you can try a few things:

  1. Validate your model with the below snippet:

check_model.py

import sys
import onnx

filename = sys.argv[1]  # path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)
  2. Try running your model with the trtexec command:
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec

In case you are still facing the issue, please share the trtexec --verbose log for further debugging, for example as captured below.
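A minimal way to capture that log (the output file name is just a placeholder):

trtexec --onnx=model.onnx --int8 --verbose 2>&1 | tee trtexec_verbose.log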
Thanks!