ERROR: builtin_op_importers.cpp:2179 In function importPad


I’m converting a model from Darknet -> PyTorch -> ONNX -> TensorRT. I can run inference with the ONNX model correctly now, but I get an error when I use trtexec to convert the model.
The error message is below:

ERROR: builtin_op_importers.cpp:2179 In function importPad:
[8] Assertion failed:


TensorRT Version : 7.1.3
GPU Type : Jetson NX iGPU
Nvidia Driver Version :
CUDA Version : 10.2
CUDNN Version : 8
Operating System + Version : Ubuntu 18.04

Relevant Files

Steps To Reproduce

sudo ./trtexec --onnx=test.onnx --explicitBatch

Hi @jackgao0323,

Kindly provide the access to the model.

Ok, I have given you the access.

Hi @jackgao0323,
I could reproduce the issue from your model.
But I am afraid TRT currently does not support convolutions where the weights are tensors.


Hi @AakankshaS
The error is from padding, not from convolutions, isn’t it? How can I solve this problem?


Can you please try onnx_graphsurgeon’s fold_constants functionality? This might be able to resolve the issue.

When I used PyTorch to export the ONNX model, I had already enabled constant folding. Should I use onnx-graphsurgeon’s fold_constants again?

Hi @jackgao0323,
Apologies for the late response.
You might be able to resolve this by manually setting pads to be a constant 1-D tensor using onnx-graphsurgeon.


Hi @AakankshaS,

I’ve got the answer in

Thanks for answering.
