INT8 inference driver error

I modified the sample code and can run inference with our model in FP32 correctly.
But when I run the application with ./test --int8, it gives the error below:

Input filename: …/samples/sampleSEGNET/Model.onnx
ONNX IR version: 0.0.6
Opset version: 9
Producer name: pytorch
Producer version: 1.7
Model version: 0
Doc string:
[02/02/2021-12:27:31] [W] [TRT] Calibrator is not being used. Users must provide dynamic range for all tensors that are not Int32.
[02/02/2021-12:27:31] [I] [TRT]
[02/02/2021-12:27:31] [I] [TRT] --------------- Layers running on DLA:
[02/02/2021-12:27:31] [I] [TRT]
[02/02/2021-12:27:31] [I] [TRT] --------------- Layers running on GPU:
[02/02/2021-12:27:31] [I] [TRT] (Unnamed Layer* 0) [Constant] + (Unnamed Layer* 1) [Shuffle] + Add_1, Conv_2 + Relu_3, Conv_4 + Relu_5, Conv_6, Conv_7 + Relu_8, Add_9, Conv_50 + Relu_51 || Conv_10 + Relu_11, Conv_12, Conv_13 + Relu_14, Add_15, Conv_16 + Relu_17, Conv_18 + Relu_19, Conv_20 + Relu_21, Add_22, Conv_23 + Relu_24, Add_25, Conv_26 + Relu_27, Conv_28 + Relu_29, Add_30…

terminate called after throwing an instance of ‘pwgen::PwgenException’
what(): Driver error:


TensorRT Version : 7.1.3
GPU Type : Xavier
Nvidia Driver Version : Package:nvidia-jetpack, Version: 4.4.1-b50
CUDA Version : 10.2.89
CUDNN Version : 8.0.0
Operating System + Version : Ubuntu 18.04
Python Version (if applicable) :
TensorFlow Version (if applicable) :
PyTorch Version (if applicable) :
Baremetal or Container (if container which image + tag) :

Is this problem caused by a wrong driver version?
Thank you.


Assuming you installed the system and all the libraries from the same JetPack release, the driver/OS/libraries should already be aligned.

We will need more information before we can make a further suggestion.
Could you run the model with --verbose and share the output log with us?

$ /usr/src/tensorrt/bin/trtexec --onnx=[/path/to/Model.onnx] --int8 --verbose
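For context on the "Calibrator is not being used" warning in the log above: when INT8 mode is enabled without a calibrator, TensorRT expects an explicit dynamic range (a min/max bound) for every non-Int32 tensor, and it derives each tensor's INT8 quantization scale from that range. The sketch below is plain Python, independent of TensorRT, and only illustrates the symmetric-quantization arithmetic involved; the function names are ours, not part of any API:

```python
def int8_scale(dynamic_range_max: float) -> float:
    """Derive the INT8 quantization scale from a tensor's dynamic range.

    Symmetric quantization maps a tensor whose values lie in [-r, r]
    onto the signed 8-bit range [-127, 127], so one INT8 step covers
    r / 127 in FP32 units.
    """
    return dynamic_range_max / 127.0

def quantize(value: float, scale: float) -> int:
    """Quantize a single FP32 value to INT8: scale, round, then clamp."""
    q = round(value / scale)
    return max(-127, min(127, q))

# Example: activations observed in [-6.0, 6.0]
scale = int8_scale(6.0)
print(quantize(6.0, scale))    # top of the range maps to 127
print(quantize(10.0, scale))   # out-of-range values clamp to 127
```

A calibrator automates exactly this: it runs representative input batches through the network, records per-tensor ranges, and feeds them to the builder, which is why skipping it requires setting each range by hand.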


Thank you for the reply.
Here's the log file:

log.txt (450.7 KB)

Thanks again for your help.


If any further information is needed, please let me know.


There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.


This seems to be a CUDA or driver issue.

Could you check whether this also happens in a JetPack 4.5 environment?
If yes, could you share the model with us?