Description
Hello
I’m trying to build an ONNX model with quantization-aware training, following this:
It was successful up to exporting the ONNX model from PyTorch, and the exported model passed onnx.checker.check_model() as well.
But building the TensorRT engine fails with a segmentation fault:
```bash
trtexec --onnx=model.onnx --workspace=3000 --int8 --verbose
[08/10/2021-18:38:51] [V] [TRT] Parsing node: DequantizeLinear_276 [DequantizeLinear]
[08/10/2021-18:38:51] [V] [TRT] Searching for input: 318
[08/10/2021-18:38:51] [V] [TRT] Searching for input: 319
[08/10/2021-18:38:51] [V] [TRT] Searching for input: 320
[08/10/2021-18:38:51] [V] [TRT] DequantizeLinear_276 [DequantizeLinear] inputs: [318 -> (192, 128, 3, 3)[FLOAT]], [319 -> ()[FLOAT]], [320 -> ()[INT8]],
[08/10/2021-18:38:51] [V] [TRT] Registering tensor: 321 for ONNX tensor: 321
[08/10/2021-18:38:51] [V] [TRT] DequantizeLinear_276 [DequantizeLinear] outputs: [321 -> (192, 128, 3, 3)[FLOAT]],
[08/10/2021-18:38:51] [V] [TRT] Parsing node: ConvTranspose_277 [ConvTranspose]
[08/10/2021-18:38:51] [V] [TRT] Searching for input: 315
[08/10/2021-18:38:51] [V] [TRT] Searching for input: 321
[08/10/2021-18:38:51] [V] [TRT] Searching for input: body.11.bias
[08/10/2021-18:38:51] [V] [TRT] ConvTranspose_277 [ConvTranspose] inputs: [315 -> (1, 192, 180, 320)[FLOAT]], [321 -> (192, 128, 3, 3)[FLOAT]], [body.11.bias -> (128)[FLOAT]],
[08/10/2021-18:38:51] [V] [TRT] Convolution input dimensions: (1, 192, 180, 320)
Segmentation fault (core dumped)
```
Here is my ONNX file:
https://drive.google.com/file/d/1gaUvxaFLcb2cinXcM6CW3pNbN9KOeI4W/view?usp=sharing
Thanks.
Environment
TensorRT Version : 8.0.1.6
GPU Type : NVIDIA GeForce RTX 2080 Ti
Nvidia Driver Version : 465.19.01
CUDA Version : 11.3
CUDNN Version : 8.2
Operating System + Version : Ubuntu 20.04
Python Version (if applicable) : 3.8
TensorFlow Version (if applicable) : -
PyTorch Version (if applicable) : 1.8.1
Baremetal or Container (if container which image + tag) : baremetal
Hi @k961118 ,
We could reproduce the same error. Are you able to run inference successfully using ONNX Runtime?
Hi @spolisetty
No, running inference with ONNX Runtime also fails while creating the inference session:
```python
import onnx
import onnxruntime

model = onnx.load("model.onnx")
onnx.checker.check_model(model)  # passes
ort_session = onnxruntime.InferenceSession("model.onnx")  # crashes here
```
```
Segmentation fault (core dumped)
```
I found this issue and I think it is related:
GitHub issue, opened 12:04PM - 04 Jun 21 UTC, closed 05:52AM - 10 Sep 21 UTC. Labels: Topic: QAT, triaged, Release: 8.x
## Description
I used pytorch-quantization to quantize the PyTorch model and … exported to ONNX with opset 13.
This is the ONNX model's ConvTranspose op:
<img width="456" alt="WX20210604-195558@2x" src="https://user-images.githubusercontent.com/9389127/120798060-63b85a80-c56f-11eb-84de-8fc62b95095b.png">
When using TensorRT 8 to load the ONNX model, parsing fails with the log below. Conv nodes seem to parse without problems, but ConvTranspose produces:
```bash
[06/04/2021-11:25:40] [I] [TRT] ----------------------------------------------------------------
[06/04/2021-11:25:40] [E] [TRT] ModelImporter.cpp:738: While parsing node number 737 [ConvTranspose -> "1464"]:
[06/04/2021-11:25:40] [E] [TRT] ModelImporter.cpp:739: --- Begin node ---
[06/04/2021-11:25:40] [E] [TRT] ModelImporter.cpp:740: input: "1458"
input: "1463"
output: "1464"
name: "ConvTranspose_737"
op_type: "ConvTranspose"
attribute {
name: "dilations"
ints: 1
ints: 1
type: INTS
}
attribute {
name: "group"
i: 1
type: INT
}
attribute {
name: "kernel_shape"
ints: 4
ints: 4
type: INTS
}
attribute {
name: "pads"
ints: 1
ints: 1
ints: 1
ints: 1
type: INTS
}
attribute {
name: "strides"
ints: 2
ints: 2
type: INTS
}
[06/04/2021-11:25:40] [E] [TRT] ModelImporter.cpp:741: --- End node ---
[06/04/2021-11:25:40] [E] [TRT] ModelImporter.cpp:744: ERROR: onnx2trt_utils.cpp:2044 In function convDeconvMultiInput:
[6] Assertion failed: (nChannel == -1 || C * ngroup == nChannel) && "The attribute group and the kernel shape misalign with the channel size of the input tensor. "
[06/04/2021-11:25:40] [E] Failed to parse onnx file
[06/04/2021-11:25:40] [E] Parsing model failed
[06/04/2021-11:25:40] [E] Engine creation failed
[06/04/2021-11:25:40] [E] Engine set up failed
```
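The assertion is TensorRT's deconvolution channel check. A plausible explanation (my assumption; the quoted log alone doesn't prove it) is the differing ONNX weight layouts: Conv weights are `(C_out, C_in/groups, kH, kW)`, while ConvTranspose weights are `(C_in, C_out/groups, kH, kW)`. A pure-Python sketch of a simplified version of the check shows how misreading a ConvTranspose weight with the Conv convention trips it, using the shapes from the first post:

```python
# Simplified sketch of the `C * ngroup == nChannel` channel check from
# onnx2trt_utils.cpp's convDeconvMultiInput (not the real implementation).

def channel_check(input_channels, weight_shape, groups, is_deconv):
    """Return True when the weight layout is consistent with the input."""
    if is_deconv:
        # ConvTranspose weights: dim 0 is the total input-channel count.
        c = weight_shape[0]
    else:
        # Conv weights: dim 1 is input channels per group.
        c = weight_shape[1] * groups
    return c == input_channels

# Thread's ConvTranspose: input (1, 192, 180, 320), weight (192, 128, 3, 3).
print(channel_check(192, (192, 128, 3, 3), 1, is_deconv=True))   # True
# Misreading the same weight with the Conv convention fails the check:
print(channel_check(192, (192, 128, 3, 3), 1, is_deconv=False))  # False
```

That mismatch (128 vs. 192 input channels) is exactly the shape of the "kernel shape misaligns with the channel size" assertion above.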
I'd like to know where the problem is, thanks!
## Environment
**TensorRT Version**: 8.0.0 EA
**NVIDIA GPU**: 2080TI
**NVIDIA Driver Version**: 450
**CUDA Version**: 11.0
**CUDNN Version**: 8.0.5
**Operating System**:
**Python Version (if applicable)**:
**Tensorflow Version (if applicable)**:
**PyTorch Version (if applicable)**:
**Baremetal or Container (if so, version)**:
NVES
August 12, 2021, 5:29am
Hi,
Request you to share the ONNX model and the script if not shared already so that we can assist you better.
Alongside, you can try a few things:
1) Validate your model with the below snippet:
check_model.py
```python
import sys
import onnx

filename = yourONNXmodel  # placeholder: replace with the path to your model
model = onnx.load(filename)
onnx.checker.check_model(model)
```
2) Try running your model with the trtexec command.
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
In case you are still facing the issue, request you to share the trtexec `--verbose` log for further debugging.
Thanks!