Problem of conversion lite-hrnet onnx to trt engine

Hi, I recently hit a problem when converting lite-hrnet to a TRT engine. The platform is a 2080 Ti, with TensorRT-7.1.3.4, CUDA 10.2, and cuDNN 8.0.5.
The error is shown below, and the ONNX file is attached:
[TensorRT] ERROR: …/builder/cudnnBuilderGraphOptimizer.cpp (3121) - Assertion Error in mergeDAG: 0 (d1Inputs.size() == n1.inputs.size())
lite_hrnet_18_sim.onnx (4.5 MB)
Looking forward to your response. Thanks.

Hi,
Request you to share the ONNX model and the script if not shared already so that we can assist you better.
Meanwhile, you can try a few things:

  1. Validate your model with the snippet below.

check_model.py

import sys
import onnx

# Usage: python check_model.py your_model.onnx
filename = sys.argv[1]
model = onnx.load(filename)
onnx.checker.check_model(model)

  2. Try running your model with the trtexec command.
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
In case you are still facing the issue, please share the trtexec --verbose log for further debugging.
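For reference, a typical trtexec invocation might look like the following. This is a sketch, not the exact command used here; the filename matches the attachment above, and paths may need adjusting for your setup:

```shell
# Build a TensorRT engine from the ONNX model.
# --verbose prints the detailed build log requested above for debugging.
trtexec --onnx=lite_hrnet_18_sim.onnx --verbose
```

If the build succeeds, adding --saveEngine=<output.trt> will also serialize the engine to disk.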
Thanks!

@775661382,

We are unable to reproduce the issue; we tried TensorRT versions 7.2.3 and 8.0 EA.
We recommend trying the latest TensorRT version. If you still face the issue, please share the trtexec --verbose logs for better debugging.

If you are unable to set it up locally, you can also use the latest TensorRT NGC container.
https://ngc.nvidia.com/containers/nvidia:tensorrt
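As a sketch, pulling and running the container might look like this. The tag is a placeholder; pick a current one from the NGC page linked above:

```shell
# Pull the TensorRT container from NGC (replace <xx.yy> with a real release tag).
docker pull nvcr.io/nvidia/tensorrt:<xx.yy>-py3

# Run it with GPU access, mounting the current directory so the
# container can see the ONNX file. trtexec is available inside.
docker run --gpus all -it -v $(pwd):/workspace/models \
    nvcr.io/nvidia/tensorrt:<xx.yy>-py3
```

Note that --gpus all requires the NVIDIA Container Toolkit to be installed on the host.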

Thank you.

Did you solve this problem? I'm hitting the same error.