Hi,
I’m trying to convert MaskRCNN from PyTorch to ONNX and then to TensorRT. The ONNX model runs successfully, but it fails to convert to TensorRT with the following error: Failed parsing .onnx file!
In node 166 (parseGraph): INVALID_NODE: Invalid Node - Pad_166
[shuffleNode.cpp::symbolicExecute::390] Error Code 4: Internal Error (Reshape_156: IShuffleLayer applied to shape tensor must have 0 or 1 reshape dimensions: dimensions were [-1,2])
Does anyone know how to fix this, or have a working example of converting MaskRCNN from PyTorch to TensorRT?
Thanks a lot.
Hi,
Please check if the below comment can also fix your issue:
Hi,
I think the padding-related node is causing the error; we don’t support 2D shape tensors yet. As a workaround, you can try constant-folding with Polygraphy. After this we were able to generate the engine successfully. Please try:
polygraphy surgeon sanitize --fold-constants grid_sample.onnx -o 2001833/folded.onnx
For more details,
Thank you.
Thanks.
Thanks for your reply.
I tried it, but it doesn’t work for my issue; the error still exists.
Hi,
Could you share the PyTorch model, the ONNX model, and the conversion source code with us?
Then we can give it a check.
Thanks.
Hi,
Would you mind sharing the ONNX model with us so we can check it directly?
Thanks.
This is the ONNX model (GitHub - AndrewYi99/upload_files). When I convert the ONNX model to TensorRT, the error is as follows:
Hi,
We just checked your model with TensorRT 8.4 (JetPack 5.0.2).
The node below has a string data type, which is not supported by TensorRT:
https://docs.nvidia.com/deeplearning/tensorrt/operators/index.html#layers-precision-matrix
input: "onnx::Pad_540"
input: ""
output: "onnx::Unsqueeze_541"
name: "Pad_166"
op_type: "Pad"
attribute {
name: "mode"
s: "constant"
type: STRING
}
Would you mind setting do_constant_folding=True when converting the ONNX model?
torch.onnx.export(..., do_constant_folding=True)
Thanks.
Hi,
The ONNX model I provided was already exported with do_constant_folding=True. Such as:
Hi,
Based on the document below:
https://github.com/onnx/onnx-tensorrt/blob/8.4-GA/docs/operators.md
TensorRT 8.4 only supports “FP32, FP16, INT8, INT32” input type in the Pad layer.
But your layer uses the “STRING” type.
Is this layer essential for your model, or can it be removed?
Thanks.
Could you tell me how I can locate the “STRING” type?
Thanks.
Hi,
When running TensorRT with --verbose, you can find this information:
...
input: "onnx::Pad_540"
input: ""
output: "onnx::Unsqueeze_541"
name: "Pad_166"
op_type: "Pad"
attribute {
name: "mode"
s: "constant"
type: STRING
}
...
Thanks.
isaac21, February 12, 2023, 7:20am (#15)
Did you solve this problem? I actually have the same issue.
Not yet. I’m trying to use a TensorRT plugin to fix it, but I’m stuck at the stage of building TensorRT from source.
isaac21, February 12, 2023, 7:32pm (#17)
Same on my side; I tried polygraphy surgeon sanitize without results.
system (Closed), February 26, 2023, 7:32pm (#18)
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.