Gather node output wrong when converting with TensorRT

Hello, I am having the same issue. @spolisetty, any updates on this?

Hi,

We are tracking this issue internally, and it may take time to fix. Once the bug is fixed, the fix will be available in a future release.
Meanwhile, if we find a workaround we will update you.

Thank you.

@spolisetty
Good day, any updates?
I encountered the exact same issue, also with a torchvision Mask R-CNN based model. The ONNX model works fine in onnxruntime and produces the same output as pure PyTorch. Since the ONNX model itself is fine, the issue must be in the TensorRT ONNX parser. Netron says the input size is 1000, but TensorRT apparently computes the wrong sizes somehow, using one of the sizes coming out of Flatten.
I have run into issues with Flatten in TensorRT before, so perhaps that op is worth looking into?
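For reference, ONNX's Flatten always produces a 2-D output by collapsing the dimensions before and after its axis. A minimal sketch of that shape rule (the example shape is hypothetical, roughly what a torchvision detection box head's RoI features look like, not taken from the model above):

```python
from math import prod

def flatten_shape(shape, axis=1):
    # ONNX Flatten collapses an N-D shape into 2-D:
    # (d0 * ... * d[axis-1], d[axis] * ... * d[n-1])
    return (prod(shape[:axis]), prod(shape[axis:]))

# Hypothetical RoI feature tensor: 1000 proposals of 256x7x7 features
print(flatten_shape((1000, 256, 7, 7)))  # (1000, 12544)
```

If TensorRT's shape analysis picks up the wrong operand of a Flatten like this, every downstream Reshape volume would be off, which matches the symptom described above.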


(attached screenshot: reshape_graph, showing the Reshape subgraph in Netron)

@spolisetty Hello! It appears that I have just run into this same issue when attempting to convert an ONNX version of PyTorch’s Keypoint RCNN model (torchvision.models.detection.keypointrcnn_resnet50_fpn) to TensorRT. Here’s the error message I got back when I tried to run trtexec:

[01/10/2023-00:24:36] [E] Error[4]: [graphShapeAnalyzer.cpp::analyzeShapes::1285] Error Code 4: Miscellaneous (IShuffleLayer Reshape_1448: reshape changes volume. Reshaping [1226077776] to [1,4756].)
[01/10/2023-00:24:36] [E] [TRT] parsers/onnx/ModelImporter.cpp:791: While parsing node number 357 [Reshape -> "onnx::Sigmoid_2725"]:
[01/10/2023-00:24:36] [E] [TRT] parsers/onnx/ModelImporter.cpp:792: --- Begin node ---
[01/10/2023-00:24:36] [E] [TRT] parsers/onnx/ModelImporter.cpp:793: input: "onnx::Reshape_2723"
input: "onnx::Reshape_2724"
output: "onnx::Sigmoid_2725"
name: "Reshape_1448"
op_type: "Reshape"
attribute {
  name: "allowzero"
  i: 0
  type: INT
}
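The "reshape changes volume" message means TensorRT's shape analyzer concluded that the Reshape would change the total element count, which ONNX forbids. A minimal, stdlib-only sketch of that consistency check, using the shapes from the error above:

```python
from math import prod

def reshape_volume_ok(in_shape, out_shape):
    # A Reshape must preserve the total number of elements;
    # TensorRT rejects it otherwise ("reshape changes volume").
    return prod(in_shape) == prod(out_shape)

# Shapes from the trtexec error: 1226077776 elements cannot
# become 1 * 4756 elements, so the parser's inferred input
# shape for Reshape_1448 must already be wrong upstream.
print(reshape_volume_ok((1226077776,), (1, 4756)))  # False
```

Since the exported model runs correctly in onnxruntime, the absurdly large input volume suggests the parser propagated a bad shape from an earlier node rather than the Reshape itself being invalid.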

Are there any additional updates on this issue?