ONNX model with dynamic batch cannot be parsed

I created an ONNX file with dynamic batch:

  dummy_input = torch.randn((10,3,112,112))
  dynamic_axes = {"input.1":{0:"batch_size"}, "1348":{0:"batch_size"}}
  torch_out = torch.onnx.export(eqface_model, dummy_input.to(device), "./pre_checkpoint/eqface_dy.onnx", dynamic_axes=dynamic_axes)

While converting the ONNX model to a TRT engine, I got this error:

----------------------------------------------------------------
[W] [TRT] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[07/23/2021-21:21:16] [E] [TRT] (Unnamed Layer* 370) [Shuffle]: at most one dimension may be inferred
ERROR: onnx2trt_utils.cpp:1498 In function scaleHelper:
[8] Assertion failed: dims.nbDims == 4 || dims.nbDims == 5

When I used a fixed batch, e.g. 1, there was no problem. Any suggestions for this problem?

Hi,
Please share the ONNX model and the script, if not shared already, so that we can assist you better.
Meanwhile, you can try a few things:
https://docs.nvidia.com/deeplearning/tensorrt/quick-start-guide/index.html#onnx-export

  1. Validate your model with the snippet below:

check_model.py

import onnx

filename = "yourONNXmodel"  # path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)
  2. Try running your model with the trtexec command.

In case you are still facing the issue, please share the trtexec "--verbose" log for further debugging.
Thanks!

Many thanks for your fast reply, NVES!

  1. check_model.py ran smoothly with no errors.
  2. I tried trtexec; it gave me the same error.
/opt/TensorRT/bin/trtexec  --onnx=/opt/rida/models/face_score/eqface_dy.onnx --minShapes=input:1x3x112x112 --optShapes=input:4x3x112x112 --maxShapes=input:8x3x112x112 --shapes=input:5x3x112x112 --verbose >> trt.log
[W] [TRT] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[E] [TRT] (Unnamed Layer* 370) [Shuffle]: at most one dimension may be inferred
ERROR: onnx2trt_utils.cpp:1498 In function scaleHelper:
[8] Assertion failed: dims.nbDims == 4 || dims.nbDims == 5
[E] Failed to parse onnx file
[E] Parsing model failed
[E] Engine creation failed
[E] Engine set up failed

The log you can find below:
trt.log (618.9 KB)

The ONNX model is a little bit large, 249 MB.
Please download it from here: Baidu Netdisk link, pwd: jbr3

@290844930,

Thank you for sharing the ONNX model, but we are unable to download it. It looks like the given pwd is wrong. Meanwhile, we recommend that you try the latest TensorRT version 8.0.1 and let us know if you still face this issue.

Thank you.

@spolisetty Hello, thanks for your reply. I updated the download link; please try again.

link: https://pan.baidu.com/s/1AnxICqHrlOuf4Iv2psQr_A pwd: 9ney

We don't have plans for TensorRT 8 yet because of our current production environment, but I will first try it with TensorRT 8 and report the result to you.

Hi @290844930,

Sorry, we are facing difficulty downloading the model from Baidu; it is asking for installation of tools. Could you please share it over Google Drive or Dropbox?

Which version of TensorRT are you currently using?

Thank you.

@spolisetty

Hello, I uploaded this model file to Google Drive. Please give it a try:

We are using TensorRT-7.1.3.4

Hi @290844930,

Thank you for sharing the ONNX model. We are unable to reproduce the issue; we could successfully build the TRT engine on version 7.2.3.4 and the latest TRT version 8.

&&&& PASSED TensorRT.trtexec # /usr/src/tensorrt/bin/trtexec --onnx=eqface_dy.onnx --minShapes=input:1x3x112x112 --optShapes=input:4x3x112x112 --maxShapes=input:8x3x112x112 --shapes=input:5x3x112x112 --verbose

Are you using the Jetson platform?

Hi @spolisetty, I am using TensorRT 7.1.3.4. Now I have a problem downloading 7.2.3.4. When I tried to log in on NVIDIA, the backend service seemed to be down with this error:

{"errors":[], "error":{"zz":{"statuscode":"503","message":"Service Unavailable -- No backend server is available to handle this request."}}}

We need to deploy this ONNX model later on the Jetson platform, which currently only supports TensorRT 7.1.3.0. If it only works on 7.2.3.4 or later versions, how could we handle this problem?

@290844930,

Please refer download link https://developer.nvidia.com/nvidia-tensorrt-download

If you're interested, alternatively you can use the TensorRT NGC container to avoid system dependencies.
https://docs.nvidia.com/deeplearning/tensorrt/container-release-notes/overview.html
https://ngc.nvidia.com/catalog/containers/nvidia:tensorrt

Thank you.

@spolisetty Today I downloaded the same version as yours; however, it still did not work. The log is attached below. What could be the reason for this?

trt_7.2.3.4.log (310.2 KB)

GPU: RTX 2070 Super

Hi,

Please make sure you shared the same ONNX model with us. Also, please try the Polygraphy tool for better debugging.
https://docs.nvidia.com/deeplearning/tensorrt/polygraphy/docs/index.html
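A possible Polygraphy invocation for this case (a sketch only, assuming Polygraphy is installed via `pip install polygraphy` and that the graph input is named `input`; adjust the name and shapes to match the actual model), comparing TensorRT against ONNX Runtime over the same dynamic-shape profile:

```shell
# Compare TensorRT and ONNX Runtime outputs on the same model; the shape
# flags mirror the trtexec optimization profile used earlier in the thread.
polygraphy run eqface_dy.onnx --trt --onnxrt \
    --trt-min-shapes input:[1,3,112,112] \
    --trt-opt-shapes input:[4,3,112,112] \
    --trt-max-shapes input:[8,3,112,112] \
    --verbose
```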

Thank you.