Elementwise operation error when building a dynamic-batch engine in explicit batch mode

Description

I currently have an ONNX file for object detection. It's based on MobileNet-v2 with an SSDLite head. I have linked the file below as a Google Drive link. I want to generate the corresponding TensorRT engine with dynamic batch support using explicit batch mode. To achieve this, I first edited the ONNX file so that the batch dimension of the inputs and outputs is -1.
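For reference, a minimal sketch of what that edit can look like with the onnx Python API (the file name is taken from the trtexec command below, and the symbolic dimension name "batch" is an arbitrary placeholder equivalent to -1):

import onnx

model = onnx.load("mobilenet-v2-ssdlite.onnx")

# Mark the batch dimension (dim 0) of every graph input and output as
# dynamic; setting a symbolic dim_param clears any fixed dim_value.
for tensor in list(model.graph.input) + list(model.graph.output):
    tensor.type.tensor_type.shape.dim[0].dim_param = "batch"

onnx.save(model, "mobilenet-v2-ssdlite-dynamic.onnx")

(One caveat with this kind of edit: it only changes the declared I/O shapes, so any shapes hard-coded inside the graph, such as Reshape targets or constants broadcast across the batch dimension, stay fixed.)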

However, when I build the engine from this updated ONNX file with the trtexec CLI, I get the following error:

[08/11/2022-10:28:07] [E] Error[2]: [graphShapeAnalyzer.cpp::throwIfError::1306] Error Code 2: Internal Error (Mul_184: dimensions not compatible for elementwise )
[08/11/2022-10:28:07] [E] Error[2]: [builder.cpp::buildSerializedNetwork::417] Error Code 2: Internal Error (Assertion enginePtr != nullptr failed.)
Segmentation fault (core dumped)

The trtexec command I run is:
/usr/src/tensorrt/bin/trtexec --onnx=mobilenet-v2-ssdlite.onnx --saveEngine=mobilenet-v2-ssdlite.trt --fp16 --inputIOFormats=fp32:chw --outputIOFormats=fp32:chw --workspace=4096 --minShapes=input:1x3x300x300 --maxShapes=input:10x3x300x300 --optShapes=input:10x3x300x300 --explicitBatch

Could you please guide me on the next steps to fix this? Thanks!

Environment

TensorRT Version: 8.0.1.6
GPU Type: GTX 1660Ti
Nvidia Driver Version: 470.141.03
CUDA Version: 11.4
CUDNN Version: 10.4.0
Operating System + Version: Ubuntu 18.04
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Google Drive link: mobilenet-v2-ssdlite.onnx

Steps To Reproduce

/usr/src/tensorrt/bin/trtexec --onnx=mobilenet-v2-ssdlite.onnx --saveEngine=mobilenet-v2-ssdlite.trt --fp16 --inputIOFormats=fp32:chw --outputIOFormats=fp32:chw --workspace=4096 --minShapes=input:1x3x300x300 --maxShapes=input:10x3x300x300 --optShapes=input:10x3x300x300 --explicitBatch

Hi,
We request you to share the ONNX model and the script, if not already shared, so that we can assist you better.
Meanwhile, you can try a few things:

  1. Validate your model with the snippet below:

check_model.py

import onnx

# Load the model and run the ONNX structural checker on it.
filename = "yourONNXmodel"  # path to your ONNX file
model = onnx.load(filename)
onnx.checker.check_model(model)

  2. Try running your model with the trtexec command.

In case you are still facing the issue, please share the trtexec --verbose log for further debugging.
Thanks!

I did all the above steps. The ONNX model is valid.

The verbose logs are stored in the file below; the forum did not allow me to post them in this comment because they exceed the character limit.

Hi,

If possible, could you please share with us the script used to generate this model?

Thank you.

I was using the trtexec CLI to generate the TensorRT engine. The command is:
/usr/src/tensorrt/bin/trtexec --onnx=mobilenet-v2-ssdlite.onnx --saveEngine=mobilenet-v2-ssdlite.trt --fp16 --inputIOFormats=fp32:chw --outputIOFormats=fp32:chw --workspace=4096 --minShapes=input:1x3x300x300 --maxShapes=input:10x3x300x300 --optShapes=input:10x3x300x300 --explicitBatch

Hi,

I meant: could you please share the script used for generating the ONNX model?
We need to make sure the model's dynamic input node is defined correctly.
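For example, you can inspect the declared input shapes with a small onnx snippet like the sketch below (a dynamic dimension shows up as a symbolic name, while an unset numeric dimension prints as 0):

import onnx

model = onnx.load("mobilenet-v2-ssdlite.onnx")

# Print each graph input with its dimensions. A symbolic dim_param marks
# a dynamic dimension; an unset numeric dim_value prints as 0.
for inp in model.graph.input:
    dims = [d.dim_param or d.dim_value for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)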

Thank you.