Could not convert Reshape layer in MobileViT model

Description

Hello,
I’m converting a .onnx model to a .engine and hit the error "Internal Error (could not find any implementation for node {ForeignNode[491…Reshape_238]})".
My first guess was that TensorRT does not support the Reshape layer; however, onnx-tensorrt/operators.md at main · onnx/onnx-tensorrt · GitHub indicates that Reshape is supported. Besides, Reshape_238 is not the first Reshape layer in this model, so I’m wondering why this particular Reshape triggers the assertion.

I have two questions here:

  1. Does TensorRT 8.2.1.8 support the Reshape layer? I have read on the official TensorRT website that one can convert a GPT-2 model with TensorRT, and GPT-2 contains Transformer blocks, which should include Reshape layers.
  2. If TensorRT 8.2.1.8 does not support the Reshape layer yet, is there any way to replace it with other supported layers? I have many reshape operations in the vision-transformer stage when folding and unfolding image patches, e.g. tensor (nhw, c) → tensor (nh, w, c).
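The folding/unfolding reshapes in question are plain shape manipulations; a toy illustration (not the actual MobileViT code, shown here with NumPy for clarity):

```python
import numpy as np

# Hypothetical dimensions for illustration only.
n, h, w, c = 2, 4, 4, 8            # batch, height, width, channels
x = np.random.rand(n * h * w, c)   # flattened patches: shape (nhw, c)

# Unfold: (nhw, c) -> (nh, w, c), splitting the width dimension back out.
y = x.reshape(n * h, w, c)

# Fold back: (nh, w, c) -> (nhw, c); the round trip is lossless.
z = y.reshape(n * h * w, c)
assert np.array_equal(x, z)
print(y.shape)  # (8, 4, 8)
```

Because these are pure reshapes with no data movement, they export to ONNX as Reshape nodes, which is why they cannot simply be dropped from the model.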

The ONNX model I use is attached below:
mobilevit_deeplabv3plus_simplified.onnx (24.3 MB)

Environment

TensorRT Version: 8.2.1.8
GPU Type: T4
Nvidia Driver Version: 460.73.01
CUDA Version: 10.2
CUDNN Version: 8.0.5
Operating System + Version: ubuntu18.04
Python Version (if applicable): 3.6
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 1.8.1
Baremetal or Container (if container which image + tag):

Relevant Files


Steps To Reproduce


Hi,
Request you to share the ONNX model and the script if not already shared so that we can assist you better.
Meanwhile, you can try a few things:

  1. Validate your model with the snippet below.

check_model.py

import onnx

filename = "your_model.onnx"  # replace with the path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)

  2. Try running your model with the trtexec command.

In case you are still facing the issue, please share the trtexec --verbose log for further debugging.
Thanks!

Hello,

The ONNX model is already provided above. The following is the log when running:
./trtexec --onnx=/home/lichenxi/projects/ipm_multitask_trt/data/mobilevit_deeplabv3plus_simplified.onnx --tacticSources=-cublasLt,+cublas --workspace=4096 --saveEngine=mobilevit.engine --verbose

And the full-version log is attached here :
log.txt (759.6 KB)

The reason I use --tacticSources=-cublasLt,+cublas is that I otherwise get Error Code 2: Internal Error (Assertion cublasStatus == CUBLAS_STATUS_SUCCESS failed.). Installing the two patches for CUDA 10.2 doesn’t help, so for the moment I use this option to work around the cuBLAS error.

Looking forward to your reply. Thanks.

Hi,

We recommend that you try the latest TensorRT version, 8.4 EA:
https://developer.nvidia.com/nvidia-tensorrt-8x-download

On the latest TRT version, we could not reproduce the error and successfully built the engine.

&&&& PASSED TensorRT.trtexec [TensorRT v8400] # /usr/src/tensorrt/bin/trtexec --onnx=mobilevit_deeplabv3plus_simplified.onnx --tacticSources=-cublasLt,+cublas --workspace=10000 --saveEngine=mobilevit.engine --verbose

Thank you.

Thanks for your reply; I really appreciate your help!

I will try TensorRT 8.4 later. However, I have a constraint on the TensorRT version: the platform I use only supports TensorRT 8.2.1.8 at the moment. I’m wondering what caused this error — do you have any idea? I checked the onnx-tensorrt documentation, and it says the Reshape operator is supported on TensorRT 8.2.x.

Thank you.

Hi,

Sorry, we couldn’t reproduce the error on v8.2.1.8 either; we could successfully build the TensorRT engine.

[05/17/2022-18:05:31] [I] TensorRT version: 8.2.1
&&&& PASSED TensorRT.trtexec [TensorRT v8201] # trtexec --onnx=mobilevit_deeplabv3plus_simplified.onnx --tacticSources=-cublasLt,+cublas --workspace=4096 --verbose

Could you please make sure you installed TensorRT and its dependencies correctly? Also, please try increasing the workspace size.

Thank you.

Were you able to resolve the issue following the instructions from sploisetty? I want to check whether I can close this issue.
Thanks