Convert MaskRCNN to TensorRT

Hi,
I’m trying to convert MaskRCNN from PyTorch to ONNX and then to TensorRT. The ONNX model runs successfully, but converting it to TensorRT fails with this error: Failed parsing .onnx file!
In node 166 (parseGraph): INVALID_NODE: Invalid Node - Pad_166
[shuffleNode.cpp::symbolicExecute::390] Error Code 4: Internal Error (Reshape_156: IShuffleLayer applied to shape tensor must have 0 or 1 reshape dimensions: dimensions were [-1,2])

Does anyone know how to fix this, or have a working example of converting MaskRCNN from PyTorch to TensorRT?
Thanks a lot.

Hi,

Please check whether the comment below also fixes your issue:

Thanks.

Thanks for your reply.
I tried it, but it doesn’t fix my issue. The error still exists.

Hi,

Could you share the PyTorch model, the ONNX model, and your conversion source with us so we can take a look?

Thanks.

I used this MaskRCNN model (deep-learning-for-image-processing/pytorch_object_detection/mask_rcnn at master · WZMIAOMIAO/deep-learning-for-image-processing · GitHub) to convert to ONNX and TensorRT. The platform is a Jetson AGX Orin Developer Kit.

Hi,

Would you mind sharing the ONNX model with us so we can check it directly?
Thanks.

Here is the ONNX model (GitHub - AndrewYi99/upload_files). When I convert the ONNX model to TensorRT, the runtime error is as follows:

Hi,

We just checked your model with TensorRT 8.4 (JetPack 5.0.2).

The node has a string data type, which is not supported by TensorRT:
https://docs.nvidia.com/deeplearning/tensorrt/operators/index.html#layers-precision-matrix

input: "onnx::Pad_540"
input: ""
output: "onnx::Unsqueeze_541"
name: "Pad_166"
op_type: "Pad"
attribute {
  name: "mode"
  s: "constant"
  type: STRING
}

Would you mind setting do_constant_folding=True when exporting the ONNX model?

torch.onnx.export(..., do_constant_folding=True)

Thanks.

Hi,
The ONNX model I sent you was already exported with do_constant_folding=True. For example:
export onnx

Hi,

Based on the document below:
https://github.com/onnx/onnx-tensorrt/blob/8.4-GA/docs/operators.md

TensorRT 8.4 only supports the FP32, FP16, INT8, and INT32 input types for the Pad layer,
but your layer uses the STRING type.

Is this layer essential for your model, or can it be removed?

Thanks.

Could you tell me how I can locate the STRING type?

Thanks.

Hi,

When running TensorRT with the --verbose flag (e.g. trtexec --onnx=model.onnx --verbose), you can find this information:

...
input: "onnx::Pad_540"
input: ""
output: "onnx::Unsqueeze_541"
name: "Pad_166"
op_type: "Pad"
attribute {
  name: "mode"
  s: "constant"
  type: STRING
}
...

Thanks.

Did you solve this problem? I actually have the same issue.

Not yet. I’m trying to use a TensorRT plugin to fix it, but I’m stuck at the stage of building the TensorRT source.

Same on my side. I tried polygraphy surgeon sanitize without results.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.