Cannot convert SSD ONNX model to TensorRT

Description

Hi,

I have encountered some errors when trying to convert an ONNX model to TensorRT.
I am using a pretrained SSD Lite MobileNet V2 model that I have retrained.

First, I converted my saved_model with the following command line:

python -m tf2onnx.convert --saved-model "./saved_model_folder/" --output "./saved_model_folder/output.onnx" --opset 16 --verbose

Then I built trtexec.exe from the trtexec solution shipped with TensorRT 8.4.1.5 and ran the following command line to convert my ONNX model to TensorRT:

trtexec --onnx=output.onnx --saveEngine=output.trt

I got an "Unsupported ONNX data type: UINT8 (2)" error.

So, I successfully converted the input of my ONNX model to FP32 instead of UINT8 with the script below.
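
In essence, the script retypes the UINT8 graph input to FLOAT32 so the TensorRT ONNX parser accepts it. A minimal sketch of the idea, assuming a single UINT8 graph input (the actual script may differ slightly):

```python
import onnx

model = onnx.load("output.onnx")

# Retype every UINT8 graph input to FLOAT32 so the TensorRT ONNX parser accepts it.
for graph_input in model.graph.input:
    tensor_type = graph_input.type.tensor_type
    if tensor_type.elem_type == onnx.TensorProto.UINT8:
        tensor_type.elem_type = onnx.TensorProto.FLOAT

onnx.checker.check_model(model)
onnx.save(model, "output_float32.onnx")
```
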
Then I ran trtexec again with the following command line:

trtexec --onnx=output_float32.onnx --saveEngine=output.trt

The previous error was gone but I got another error that I do not understand:

[graphShapeAnalyzer.cpp::nvinfer1::builder::`anonymous-namespace'::ShapeNodeRemover::processCheck::587] Error Code 4: Internal Error ((Unnamed Layer* 43) [LoopOutput]_output: tensor volume exceeds (2^31)-1, dimensions are [2147483647,3])

Do you know how I can get past this error and convert my ONNX model to TensorRT, please?
Thank you.

Environment

**TensorRT Version**: 8.4.1.5
**GPU Type**: NVIDIA GTX 1060 6GB
**Nvidia Driver Version**: 512.15
**CUDA Version**: 11.2
**CUDNN Version**: 8.1.1
**Operating System + Version**: Windows 10 Pro
**Python Version (if applicable)**: 3.9.13
**TensorFlow Version (if applicable)**: 1.12.0

Hi,
We request you to share the ONNX model and the script, if not already shared, so that we can assist you better.
Meanwhile, you can try a few things:

  1. Validate your model with the snippet below (check_model.py):

```python
import onnx

filename = "your_model.onnx"  # path to the ONNX model to validate
model = onnx.load(filename)
onnx.checker.check_model(model)
```

  2. Try running your model with the trtexec command.

In case you are still facing the issue, please share the trtexec --verbose log for further debugging.
Thanks!

Hi,

Please refer to the following:

Thank you.

Thank you for your quick answer!

According to your code, both ONNX models are valid.

Due to confidentiality issues, I cannot share my models with you, but you can find attached the verbose logs of the TensorRT conversion of my UINT8 model (output.onnx) and my FLOAT32 model (output_float32.onnx).

Thank you.

output_float32_onnx_trt_conversion_output.txt (72.3 KB)
output_onnx_trt_conversion_output.txt (12.5 KB)

UINT8 data type is currently not supported by TensorRT.
TensorRT supports the following ONNX data types: DOUBLE, FLOAT32, FLOAT16, INT8, and BOOL.

Thank you for your answer.

Yes, I have seen other topics about the UINT8 data issue.
I have also seen that a future update regarding this is mentioned in this topic:

Do you know when it will be available please?

Because of the UINT8 data issue, I tried changing my model input to FLOAT32 instead of UINT8, but I got the following error (you can see the full error in the output_float32_onnx_trt_conversion_output.txt file):

[graphShapeAnalyzer.cpp::nvinfer1::builder::`anonymous-namespace'::ShapeNodeRemover::processCheck::587] Error Code 4: Internal Error ((Unnamed Layer* 43) [LoopOutput]_output: tensor volume exceeds (2^31)-1, dimensions are [2147483647,3])

Do you have an idea how to fix the errors for my FLOAT32 model, please?

Thank you.

Hi,

Currently, we do not have an approximate ETA.

As mentioned earlier, TensorRT currently does not support tensors with more than 2^31-1 elements, and we do not have a workaround other than modifying the network. Note that the reported first dimension, 2147483647, is exactly 2^31-1, which usually indicates an unbounded (data-dependent) loop output rather than a real tensor size.

Thank you.

Hi,

I would like to share some updates regarding the conversion of my model to TensorRT.

I modified the overall steps as follows:

  1. I took my saved model and pinned its input shape to [1,576,720,3]
    → this removed the "tensor volume exceeds (2^31)-1" error when converting from ONNX to TensorRT (see the sketch right after this list)
  2. I converted it to ONNX with opset 13
    → with opset 15 I had the same error as this post:
    TensorRT Parsing ONNX Model Error - #5 by Gudbach
  3. I converted its input to FLOAT32 (with the script I shared above)
  4. I converted the NMS layers to the TRT plugin with the script attached below
  5. I converted this model to TRT with the following command line:
    trtexec --onnx=ssd_lite_mobilenet_v2_input_shape_ops_13_float32_BatchedNMSDynamic_TRT.onnx --saveEngine=output.trt
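
For step 1, I pinned the shape on the TensorFlow saved model before export, but the same pinning can also be sketched directly on an exported ONNX graph. A minimal sketch, assuming the image tensor is the graph's only input (file names here are placeholders):

```python
import onnx

model = onnx.load("ssd_lite_mobilenet_v2.onnx")  # placeholder file name

# Pin the dynamic input shape to a static [1, 576, 720, 3] (NHWC).
for dim, size in zip(model.graph.input[0].type.tensor_type.shape.dim,
                     [1, 576, 720, 3]):
    dim.dim_value = size  # overwrites any symbolic dimension (dim_param)

onnx.save(model, "ssd_lite_mobilenet_v2_input_shape.onnx")
```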

The conversion of the ONNX model to TRT failed with the following error:

```
Assertion failed: nbInputs == 2
C:\_src\plugin\batchedNMSPlugin\batchedNMSPlugin.cpp:189
Aborting...
```

I also tried replacing "BatchedNMSDynamic_TRT" with "BatchedNMS_TRT" in step 4, and I got a similar error when trying to convert my ONNX model to TRT:

```
Assertion failed: nbInputDims == 2
C:\_src\plugin\batchedNMSPlugin\batchedNMSPlugin.cpp:151
Aborting...
```

I attached the script for step 4 and the TRT conversion verbose outputs with "BatchedNMS_TRT" and "BatchedNMSDynamic_TRT":
convert_ssd_onnx_model_with_nms_trt.py (2.8 KB)
ssd_lite_mobilenet_v2_input_shape_ops_13_float32_BatchedNMS_TRT.txt (375.8 KB)
ssd_lite_mobilenet_v2_input_shape_ops_13_float32_BatchedNMSDynamic_TRT.txt (375.9 KB)
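
In essence, the step-4 script does roughly the following. This is a trimmed sketch, not the full attachment; the tensor names, class count, and thresholds are placeholders that must match your own graph:

```python
import numpy as np
import onnx
import onnx_graphsurgeon as gs

graph = gs.import_onnx(onnx.load("ssd_lite_mobilenet_v2_input_shape_ops_13_float32.onnx"))
tensors = graph.tensors()

# The plugin expects exactly two inputs:
#   boxes  [batch, num_boxes, 1, 4]  and  scores [batch, num_boxes, num_classes]
boxes_input = tensors["boxes"]    # placeholder name: look it up in your graph
scores_input = tensors["scores"]  # placeholder name

attrs = {
    "shareLocation": True,
    "backgroundLabelId": -1,
    "numClasses": 90,        # placeholder class count
    "topK": 100,
    "keepTopK": 100,
    "scoreThreshold": 0.3,
    "iouThreshold": 0.6,
    "isNormalized": True,
    "clipBoxes": True,
}

# Declare the four plugin outputs and make them the graph outputs.
nms_outputs = [
    gs.Variable("num_detections", dtype=np.int32),
    gs.Variable("nmsed_boxes", dtype=np.float32),
    gs.Variable("nmsed_scores", dtype=np.float32),
    gs.Variable("nmsed_classes", dtype=np.float32),
]
graph.layer(op="BatchedNMSDynamic_TRT", name="batched_nms",
            inputs=[boxes_input, scores_input], outputs=nms_outputs, attrs=attrs)
graph.outputs = nms_outputs

graph.cleanup().toposort()  # drop the original, now-unreachable NMS subgraph
onnx.save(gs.export_onnx(graph),
          "ssd_lite_mobilenet_v2_input_shape_ops_13_float32_BatchedNMSDynamic_TRT.onnx")
```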

Do you have an idea how to fix this nbInputs error, please?

Thank you.

Hi,

Could you please share with us the latest ONNX model so we can try it on our end for better debugging?
Also please refer to the following, which may help you.

Thank you.

Hi,

I sent you a private message with my models.

Thank you for your help.

Hi,

We suspect that the issue is due to the plugin node expecting two inputs (boxes and scores) but getting something else.

[09/27/2022-06:00:35] [V] [TRT] onnx_graphsurgeon_node_0 [BatchedNMSDynamic_TRT] inputs: [Unsqueeze__759:0 -> (1, 1, 2517)[FLOAT]],

Seems like there is only one input. We are checking on more details.
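
A quick way to confirm what the plugin node actually receives (a sketch using onnx-graphsurgeon; the model file name is a placeholder):

```python
import onnx
import onnx_graphsurgeon as gs

graph = gs.import_onnx(onnx.load("model.onnx"))  # placeholder file name
for node in graph.nodes:
    if node.op == "BatchedNMSDynamic_TRT":
        # Expect exactly two inputs here: boxes and scores.
        print(node.name, [tensor.name for tensor in node.inputs])
```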

Thank you.

Hi @gael_lagarde,

Could you please share ssd_lite_mobilenet_v2_input_shape_ops_13_float32.onnx?
The graphsurgeon script specifies the two required inputs; however, the processed model has only one.

```python
self.layer(op="BatchedNMSDynamic_TRT", attrs=attrs,  # nbInputs == 2 error in TensorRT conversion
           inputs=[boxes_input, scores_input],
           outputs=[nms_output])
```

Thank you.

Hi,

Thank you for your answers.
Actually, the "ssd_lite_mobilenet_v2_input_shape_ops_13_float32.onnx" model corresponds to "test_nvidia_ops_13_input_shape_float32_BatchedNMSDynamic_TRT.onnx"; I just renamed it before sharing it with you.

I did not modify the number of inputs myself; is the input count modified by the ONNX conversion, or somewhere else in the scripts I shared with you?

Thank you.

Hi,

Could you please check and confirm again?
The above model has the Plugin nodes inserted by ONNX-graphsurgeon. We are looking for the source SSD model without the plugin nodes.

Based on the graphsurgeon script, we can see that ssd_lite_mobilenet_v2_input_shape_ops_13_float32_BatchedNMSDynamic_TRT.onnx is the output model (which you likely renamed). We need the input model below:

```python
input_model_path = r".\ssd_lite_mobilenet_v2_input_shape_ops_13_float32.onnx"
output_model_path = r".\ssd_lite_mobilenet_v2_input_shape_ops_13_float32_BatchedNMSDynamic_TRT.onnx"
```

Thank you.

Hi,

I sent you models in private message.

Thank you again for your help.

Hi @spolisetty ,

Do you have any updates regarding this issue, please?

Thank you again.