Failure converting Mask R-CNN ONNX model to TensorRT - Pad string


When I try to convert my model from ONNX to TensorRT format I get the error below. I also tried applying Polygraphy, but did not get good results. For the PyTorch-to-ONNX export I used opset version 16:

[02/11/2023-15:24:59] [E] [TRT] ModelImporter.cpp:775: input: "/transform/Gather_output_0"
input: "/transform/Cast_17_output_0"
input: ""
output: "/transform/Pad_output_0"
name: "/transform/Pad"
op_type: "Pad"
attribute {
  name: "mode"
  s: "constant"
  type: STRING


TensorRT Version:
GPU Type: T4
Nvidia Driver Version: 510.47.03
CUDA Version: 11.6
CUDNN Version: 8.4
Operating System + Version: Ubuntu 20.04.5
PyTorch Version: 1.13.1

Relevant Files

model in ONNX format: model.onnx

after applying Polygraphy: model_san.onnx

Steps To Reproduce

trtexec --onnx=model_san.onnx --saveEngine=model.plan --fp16 --workspace=7000 --buildOnly --verbose
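One possible workaround worth trying for the Pad error (a hedged sketch, not an official NVIDIA fix): the importer appears to trip on the empty string passed as Pad's optional third input (`constant_value`, shown as `input: ""` in the dump above). Stripping trailing empty optional inputs from Pad nodes before building the engine sometimes helps. The helper below is a minimal sketch; the commented usage assumes the `onnx` package, and the file names are placeholders.

```python
def strip_trailing_empty_inputs(inputs):
    """Drop trailing "" entries, which ONNX uses to mark omitted optional inputs."""
    trimmed = list(inputs)
    while trimmed and trimmed[-1] == "":
        trimmed.pop()
    return trimmed

# Hypothetical usage with the onnx package (file names are placeholders):
# import onnx
# model = onnx.load("model_san.onnx")
# for node in model.graph.node:
#     if node.op_type == "Pad":
#         node.input[:] = strip_trailing_empty_inputs(node.input)
# onnx.save(model, "model_pad_fixed.onnx")
```

Whether this is enough depends on the TensorRT version; newer importers tolerate the empty optional input.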



We recommend that you try the most recent TensorRT version 8.5.3.
I hope the following similar posts will help you.

Thank you.

I tried with TensorRT version 8.5.3, now the error is as follows:

input: "/rpn/Reshape_26_output_0"
output: "/rpn/TopK_output_0"
output: "/rpn/TopK_output_1"
name: "/rpn/TopK"
op_type: "TopK"
attribute {
  name: "axis"
  i: 1
  type: INT
attribute {
  name: "largest"
  i: 1
  type: INT
attribute {
  name: "sorted"
  i: 1
  type: INT

Any ideas or workarounds you can suggest? This is with the model after applying Polygraphy. Thanks a lot!
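One thing worth checking here (a diagnostic sketch, not an official fix): TensorRT generally requires TopK's K to be a build-time constant, and the dump above shows a TopK node with only one listed input. A small helper can flag TopK nodes whose K input is missing or not backed by a graph initializer. The node dicts below are a simplified stand-in for ONNX protobuf nodes; with a real model the same check would run over `model.graph.node` and the names in `model.graph.initializer`.

```python
def find_suspect_topk(nodes, initializer_names):
    """Flag TopK nodes whose K input is absent or not a constant initializer
    (TensorRT generally needs K to be known at build time)."""
    suspects = []
    for node in nodes:
        if node["op_type"] != "TopK":
            continue
        inputs = node["input"]
        # In opset >= 10, TopK takes K as a second input tensor.
        if len(inputs) < 2 or inputs[1] not in initializer_names:
            suspects.append(node["name"])
    return suspects
```

If such nodes are found, folding K to a constant (e.g. via Polygraphy's constant folding) is a common direction to try.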


We could reproduce a different error on the 8.5.3 version.
Please allow us some time to work on this issue.

Thank you.

I’ll be waiting, thanks a lot!

Hi @isaac21 ,
PyTorch Mask R-CNN has known issues when imported into TensorRT.
We are working on addressing this, and the fix will be available in a future release.
Please stay tuned.


I have seen that this problem has been going on for a while. Any possible release date?

Hi @isaac21 , apologies for the inconvenience caused.

The team is already working on the fixes, and we are aiming to provide a fix in an upcoming release.

Thank you.