I modified the sample code and can run inference with our model in FP32 correctly.
But when I run the application with ./test --int8, it gives the error below:
Input filename: …/samples/sampleSEGNET/Model.onnx
ONNX IR version: 0.0.6
Opset version: 9
Producer name: pytorch
Producer version: 1.7
Domain:
Model version: 0
Doc string:
[02/02/2021-12:27:31] [W] [TRT] Calibrator is not being used. Users must provide dynamic range for all tensors that are not Int32.
[02/02/2021-12:27:31] [I] [TRT]
[02/02/2021-12:27:31] [I] [TRT] --------------- Layers running on DLA:
[02/02/2021-12:27:31] [I] [TRT]
[02/02/2021-12:27:31] [I] [TRT] --------------- Layers running on GPU:
[02/02/2021-12:27:31] [I] [TRT] (Unnamed Layer* 0) [Constant] + (Unnamed Layer* 1) [Shuffle] + Add_1, Conv_2 + Relu_3, Conv_4 + Relu_5, Conv_6, Conv_7 + Relu_8, Add_9, Conv_50 + Relu_51 || Conv_10 + Relu_11, Conv_12, Conv_13 + Relu_14, Add_15, Conv_16 + Relu_17, Conv_18 + Relu_19, Conv_20 + Relu_21, Add_22, Conv_23 + Relu_24, Add_25, Conv_26 + Relu_27, Conv_28 + Relu_29, Add_30…
…
…
terminate called after throwing an instance of ‘pwgen::PwgenException’
what(): Driver error:
Aborted
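For context on the warning above ("Users must provide dynamic range for all tensors"): when no INT8 calibrator is attached, TensorRT expects an explicit dynamic range [-amax, amax] for every non-Int32 tensor, from which it derives the INT8 scale. The sketch below is not TensorRT API code; it is just a minimal illustration, under the common symmetric-quantization assumption, of the arithmetic such a dynamic range implies (the `amax` value of 2.0 is a made-up calibration result):

```python
import numpy as np

def int8_scale(amax):
    # Symmetric quantization: map [-amax, amax] onto [-127, 127]
    return amax / 127.0

def quantize(x, scale):
    # Round to nearest INT8 code, clipping to the representable range
    return np.clip(np.round(x / scale), -127, 127).astype(np.int8)

def dequantize(q, scale):
    # Recover an approximation of the original FP32 values
    return q.astype(np.float32) * scale

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0], dtype=np.float32)
scale = int8_scale(2.0)          # hypothetical per-tensor amax = 2.0
q = quantize(x, scale)           # INT8 codes: [-127, -32, 0, 32, 127]
x_hat = dequantize(q, scale)     # reconstruction with quantization error
print(q.tolist())
print(x_hat)
```

Without a calibrator (or ranges set per tensor), TensorRT has no `amax` for each tensor and cannot build the INT8 engine, which may be related to the abort that follows the warning.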
Environment
TensorRT Version : 7.1.3
GPU Type : Xavier
Nvidia Driver Version : Package:nvidia-jetpack, Version: 4.4.1-b50
CUDA Version : 10.2.89
CUDNN Version : 8.0.0
Operating System + Version : Ubuntu 18.04
Python Version (if applicable) :
TensorFlow Version (if applicable) :
PyTorch Version (if applicable) :
Baremetal or Container (if container which image + tag) :
Is this problem caused by a wrong driver version?
Thank you.