Issues with ONNX to TensorRT Conversion for the Faster R-CNN Mobilenet V3 Model

Description:

I am currently encountering difficulties when attempting to convert the Faster R-CNN Mobilenet V3 model from PyTorch to ONNX and subsequently to TensorRT. The PyTorch model I am using is the pretrained ‘fasterrcnn_mobilenet_v3_large_320_fpn’ model. The goal is to perform object detection in images.

Firstly, I exported the model to ONNX format using the PyTorch ‘torch.onnx.export()’ function. The initial conversion from PyTorch to ONNX was successful, but I am having issues with the next stage: conversion from ONNX to TensorRT.
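For reference, my export looked roughly like the sketch below (the opset version, dummy input size, and input/output names here are illustrative rather than my exact settings):

import torch
import torchvision

# Load the pretrained detection model and switch to inference mode
model = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_320_fpn(pretrained=True)
model.eval()

# torchvision detection models take a list of 3xHxW tensors
dummy_input = [torch.randn(3, 320, 320)]

torch.onnx.export(
    model,
    dummy_input,
    "fasterrcnn_mobilenet_v3_large_320_fpn.onnx",
    opset_version=11,
    input_names=["images"],
    output_names=["boxes", "labels", "scores"],
)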

While trying to convert the model to TensorRT format, or while trying to run inference with the TensorRT execution provider in ONNX Runtime, I encounter the following error:

[ONNXRuntimeError] : 1 : FAIL : TensorRT input: idxs has no shape specified. Please run shape inference on the onnx model first. Details can be found in https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html#shape-inference-for-tensorrt-subgraphs
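For context, I was creating the ONNX Runtime session roughly like this (the provider fallback order is illustrative):

import onnxruntime as ort

# Prefer the TensorRT execution provider, falling back to CUDA and CPU
session = ort.InferenceSession(
    "fasterrcnn_mobilenet_v3_large_320_fpn.onnx",
    providers=[
        "TensorrtExecutionProvider",
        "CUDAExecutionProvider",
        "CPUExecutionProvider",
    ],
)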

To address this, I tried to run shape inference on the ONNX model using the onnx.shape_inference.infer_shapes() function. However, the issue persisted even after running shape inference.
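Concretely, this is roughly what I ran (the output filename is illustrative):

import onnx
from onnx import shape_inference

# Run ONNX shape inference and save the annotated model
model = onnx.load("fasterrcnn_mobilenet_v3_large_320_fpn.onnx")
inferred_model = shape_inference.infer_shapes(model)
onnx.save(inferred_model, "fasterrcnn_mobilenet_v3_large_320_fpn_inferred.onnx")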

I suspect this issue could be due to certain operations in the Faster R-CNN model that may not be fully supported by TensorRT, but I would appreciate any guidance or insights on how to resolve this problem.

I am currently using ONNX 1.10.1, PyTorch 1.11.0, and TensorRT 8.6.1 on a system with CUDA 11.5.

Thank you in advance for your assistance.

Hi,
We request you to share the ONNX model and the script, if not already shared, so that we can assist you better.
Alongside, you can try a few things:

  1. Validate your model with the below snippet:

check_model.py

import onnx

# Path to your ONNX model (placeholder filename)
filename = "yourONNXmodel.onnx"
model = onnx.load(filename)

# Raises an exception if the model is invalid; prints nothing on success
onnx.checker.check_model(model)
  2. Try running your model with the trtexec command (see the example below).
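A minimal invocation might look like this (the filename is a placeholder; the --verbose flag produces the detailed log mentioned below):

trtexec --onnx=yourONNXmodel.onnx --verbose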

In case you are still facing the issue, we request you to share the trtexec --verbose log for further debugging.
Thanks!

fasterrcnn_mobilenet_v3_large_320_fpn.onnx (74.2 MB)
output.txt (1.1 MB)

The uploaded files are the ONNX model I use and the log from trtexec, based on the pretrained fasterrcnn_mobilenet_v3_large_320_fpn model from PyTorch.

As for check_model.py, I ran it and it produced no output and raised no errors.

Thank you!

Hi,

Sorry for the delayed response.
When we tried running trtexec on your model, we observed the following error.

[06/27/2023-05:51:03] [E] [TRT] ModelImporter.cpp:774: --- End node ---
[06/27/2023-05:51:03] [E] [TRT] ModelImporter.cpp:777: ERROR: ModelImporter.cpp:195 In function parseGraph:
[6] Invalid Node - If_1047
If_1047_OutputLayer: IIfConditionalOutputLayer inputs must have the same shape. Shapes are [-1] and [-1,1].
[06/27/2023-05:51:03] [E] Failed to parse onnx file
[06/27/2023-05:51:03] [I] Finished parsing network model. Parse time: 0.353755
[06/27/2023-05:51:03] [E] Parsing model failed
[06/27/2023-05:51:03] [E] Failed to create engine from model or file.
[06/27/2023-05:51:03] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v8601] # trtexec --onnx=fasterrcnn_mobilenet_v3_large_320_fpn.onnx --verbose --workspace=20000

It looks like there is a problem with the model definition. When dealing with an If node, you have to make sure that the outputs of the true and false branches have the same shape for every output.
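If you want to inspect this yourself, a rough sketch along the lines below (using the node name from the log above) prints the declared output shapes of each branch of the failing If node:

import onnx

model = onnx.load("fasterrcnn_mobilenet_v3_large_320_fpn.onnx")

# Locate the failing If node and print each branch's declared output shapes
for node in model.graph.node:
    if node.op_type == "If" and node.name == "If_1047":
        for attr in node.attribute:
            if attr.name in ("then_branch", "else_branch"):
                for out in attr.g.output:
                    dims = [
                        d.dim_value if d.HasField("dim_value") else d.dim_param
                        for d in out.type.tensor_type.shape.dim
                    ]
                    print(node.name, attr.name, out.name, dims)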

Thank you.

Is it currently possible to export the torchvision.models.detection.fasterrcnn_mobilenet_v3_large_320_fpn model to TensorRT, or are there known difficulties? If so, how can it be done using trtexec?