ONNX to TensorRT inference fails

Description

With the engine I generated on a GTX 1080 Ti graphics card, inference was normal, but when I switched to an engine generated on an RTX 3060, the inference results were wrong.

Environment

TensorRT Version: 8.0.1.6
GPU Type: RTX 3060
Nvidia Driver Version: 471.68
CUDA Version: 11.3
CUDNN Version: 8.2.1
Operating System + Version: Windows 10
Python Version (if applicable): no
TensorFlow Version (if applicable): no
PyTorch Version (if applicable): 1.4
Baremetal or Container (if container which image + tag):

Relevant Files

Sorry, the ONNX file is too large to upload successfully.

Steps To Reproduce

GTX 1080 Ti configuration: CUDA 10.2, cuDNN 8.1, TensorRT 8.0.1.6; with that setup it works.

cmd:
trtexec.exe --explicitBatch --onnx=test.onnx --saveEngine=test.eng --verbose=true

I hope you can give some suggestions.

Hi,
Please share the ONNX model and the script, if not shared already, so that we can assist you better.
In the meantime, you can try a few things:

  1. Validate your model with the snippet below (see the usage example after this list).

check_model.py

import sys
import onnx

# Path to your ONNX model, passed on the command line
filename = sys.argv[1]
model = onnx.load(filename)
# Raises an exception if the model is structurally invalid
onnx.checker.check_model(model)
print("ONNX model check passed")
  2. Try running your model with the trtexec command.
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
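
To run the checker snippet from step 1, a command along these lines should work (assuming the snippet is saved as check_model.py and the model file is test.onnx, as in your trtexec command):

python check_model.py test.onnx

If the checker raises an exception, the exported ONNX model itself is invalid and should be fixed before debugging the TensorRT engine build.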
In case you are still facing the issue, please share the trtexec "--verbose" log for further debugging.
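
For example, a build command along these lines (a sketch based on your original command; the log file name trtexec_verbose.log is just an example) captures the full verbose output to a file you can attach here:

trtexec.exe --explicitBatch --onnx=test.onnx --saveEngine=test.eng --verbose > trtexec_verbose.log 2>&1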
Thanks!