Trying to convert ONNX to a dynamic-shape engine via trtexec, but it does not work

Description

I exported a dynamic-shape ONNX model from NeMo, built a dynamic-shape engine with trtexec, and deserialized the TRT engine with the C++ API, but I get an error in enqueue.

Environment

TensorRT Version: 8.2.3.0
GPU Type: A100
Nvidia Driver Version: 450.102
CUDA Version: 11.0
CUDNN Version: 8
Operating System + Version: Ubuntu 20.04
Python Version (if applicable): N/A (using C++)
TensorFlow Version (if applicable):
PyTorch Version (if applicable): NeMo 1.4
Baremetal or Container (if container which image + tag): nvidia/cuda:11.0.3-cudnn8-devel-ubuntu20.04

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • With NeMo, export a Conformer model via torch.onnx.export with input shape (B, D, T), where B and T are dynamic axes.
  • Convert the ONNX model into an engine via trtexec: trtexec --onnx=/conformer.onnx --saveEngine=/conformer.trt --minShapes=audio_signal:1x80x100 --optShapes=audio_signal:16x80x1200 --maxShapes=audio_signal:16x80x1200 --shapes=audio_signal:16x80x1200 --workspace=10240
  • With the C++ API, deserialize conformer.trt, call context->setBindingDimensions(signal_binding, Dims3(B, D, T)), then context->enqueueV2(buffers, stream, nullptr);
  • ERROR:1: [runner.cpp::execute::416] Error Code 1: Cuda Runtime (invalid argument)
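For what it's worth, a Cuda Runtime (invalid argument) at enqueueV2 with dynamic shapes is often caused by device buffers that were sized from the build-time dimensions (which still contain -1 on dynamic axes) instead of the dimensions actually set at runtime. A minimal sketch of the sizing logic follows; the helper name is hypothetical and the surrounding TensorRT calls are shown only as comments, not as the original code:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical helper: byte size of a binding once its shape is fully
// specified. With dynamic shapes this must be computed AFTER
// context->setBindingDimensions(...), from the resolved dimensions,
// because the engine-level dims still contain -1 for dynamic axes.
size_t bindingBytes(const std::vector<int64_t>& dims, size_t elemSize) {
    size_t count = 1;
    for (int64_t d : dims) {
        if (d < 0) return 0;  // shape not fully resolved yet
        count *= static_cast<size_t>(d);
    }
    return count * elemSize;
}

// Usage sketch (TensorRT calls elided, shown as comments):
//   context->setBindingDimensions(signal_binding, Dims3(B, D, T));
//   // query the resolved dims from the context, then allocate:
//   // cudaMalloc(&buffers[signal_binding], bindingBytes(dims, sizeof(float)));
//   context->enqueueV2(buffers, stream, nullptr);
```

If a buffer allocated for the min profile (e.g. 1x80x100) is passed while the context is bound to 16x80x1200, the kernel launch can fail with exactly this invalid-argument error.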

Hi,
Please share the ONNX model and the script if you have not already, so that we can assist you better.
In the meantime, you can try a few things:

  1. Validate your model with the snippet below:

check_model.py

import onnx

filename = "your_model.onnx"  # path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)  # raises an exception if the model is invalid
  2. Try running your model with the trtexec command.

In case you are still facing the issue, request you to share the trtexec --verbose log for further debugging.
Thanks!

I am unable to share the ONNX model. Thank you for your reply, I will try it immediately.

Could you please share the trtexec --verbose logs with us for better debugging.
Also, we recommend using the latest TensorRT version.

Thank you.

Has this problem been solved? I am hitting almost the same issue.

onnx.checker.check_model returns None when run, which means the ONNX model is OK.