Driver error after upgrading from TensorRT 6 to TensorRT 7 with YOLOv4 in INT8 mode

I can run my code with TensorRT 6 in INT8 mode and with TensorRT 7 in FP32 mode, but I get the error shown in the following image when I use TensorRT 7 in INT8 mode.
[image]

This is the verbose output.

[image]

Environment

TensorRT Version : 7.1.3
GPU Type : Jetson AGX iGPU
Nvidia Driver Version :
CUDA Version : 10.2
CUDNN Version : 8
Operating System + Version : Ubuntu 18.04

Hi,

Here is a similar issue that turned out to be an environment problem:

Would you mind checking whether the suggestion helps in your use case as well?
If the issue persists, please share the detailed steps with us so we can reproduce it.

Thanks.

Hi~!

We've tried that approach but still failed to solve the problem on Xavier NX. How can we diagnose the issue? We will attach our steps later. Thanks.
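For diagnosing issues like this, one common step (assuming the `trtexec` sample tool shipped with JetPack is present on the device; the model filename below is a placeholder) is to rebuild the engine with verbose logging and capture the output:

```shell
# Rebuild the INT8 engine with verbose parser/builder logs.
# /usr/src/tensorrt/bin is the default trtexec location on Jetson.
/usr/src/tensorrt/bin/trtexec --onnx=yolov4.onnx --int8 --verbose
```

The verbose log usually pinpoints which layer or tactic the builder fails on, which is helpful to attach when reporting the issue.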

Ray

Hi @AastaLLL
I’m Ray’s coworker. I’m trying another way to solve this problem but I get other errors now. You can follow this post.

ERROR: builtin_op_importers.cpp:2179 In function importPad inputs.at(1).is_weights()

Thanks!

Hi, both

Have you solved this issue with the new approach?
Thanks

No, we still have the same problem.

Hi,

Would you mind sharing the detailed steps so we can reproduce this in our environment?
Thanks.

You can follow this reply. I have provided the model and steps in the following link.

Thanks

Hi,

We tried to reproduce this issue in our environment,
but we cannot download the ONNX file shared in this comment.

Could you make the model public so we can download it?

Thanks.

Hi,

I have sent you the download link by message.

Thanks.

Thanks. We got the model successfully.
We will update here with any progress.

Hi,

We can reproduce this issue in our environment.
Based on the topic below, this issue can be avoided by setting the opset version to 9.

Could you check if it helps first?
Thanks.

Hi,
Apologies for the late reply; I was busy last week.
I tried it but still got the errors shown in the following picture.

Thanks.

Hi,

The root cause is from the Pad_51 layer.

Please note that the parameters of the padding layer need to be a pre-defined constant rather than a tensor input.
In your model, the padding parameter is defined as a tensor produced by some runtime calculation, even though its value is constant.

A workaround is to compute the value of tensor 226 and replace it with a constant input.
This can be achieved with our ONNX GraphSurgeon API:

Thanks.

Hi,

Thanks for your reply. I'll give it a try this week and let you know the result.

Thanks.