Why do the scores from PyTorch and from TensorRT 10.10 on x86 differ significantly for the same model?

Description


![f9dcd100baa1cd119f748be1b512c8fcc2ce2d35|432x500](upload://hPdGuctxiTMlL8GxaOLQqIlYF9h.png)

| Index | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|---|
| torch score | 0.9972 | 0.9995 | 0.0021 | 0.010 | **0.9948** | 0.0013 | 0.0011 | 0.9904 | 0.0135 |
| trt score | 0.9971 | 0.9995 | 0.000 | 0.000 | **0.1963** | 0.000 | 0.000 | 0.9902 | 0.000 |
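To quantify the mismatch, here is a small sketch that compares the two score vectors element-wise (values copied from above; the 0.05 tolerance is only illustrative, not a recommended threshold):

```python
import numpy as np

# Scores reported above (torch vs. TensorRT)
torch_scores = np.array([0.9972, 0.9995, 0.0021, 0.010, 0.9948,
                         0.0013, 0.0011, 0.9904, 0.0135])
trt_scores   = np.array([0.9971, 0.9995, 0.000, 0.000, 0.1963,
                         0.000, 0.000, 0.9902, 0.000])

abs_diff = np.abs(torch_scores - trt_scores)

# Flag elements whose absolute difference exceeds an illustrative tolerance
bad = np.where(abs_diff > 0.05)[0]
print(bad)            # indices of scores that diverge
print(abs_diff[bad])  # magnitude of each divergence
```

Only index 4 exceeds the tolerance; every other element agrees to within ~0.014, which suggests a localized numerical issue rather than a wholesale model conversion error.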

With identical input, the fifth score drops from 0.9948 in PyTorch to 0.1963 in TensorRT, while the other scores match closely. What could cause this?
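A common cause of a single score collapsing like this is reduced precision, e.g. an engine built with FP16 enabled, where an intermediate activation overflows or loses accuracy. As a toy illustration (unrelated to your specific model), here is how FP16 overflow in a naive softmax can corrupt one probability while FP32 stays correct:

```python
import numpy as np

logits = np.array([12.0, 10.0])

# FP32: naive softmax is fine at this scale
e32 = np.exp(logits.astype(np.float32))
probs32 = e32 / e32.sum()

# FP16: exp(12) ~ 162755 exceeds the float16 max (65504) and overflows to inf,
# so the normalization produces nan/0 instead of valid probabilities
e16 = np.exp(logits.astype(np.float16))
probs16 = e16 / e16.sum()

print(probs32)  # finite, well-behaved probabilities
print(probs16)  # corrupted by the overflow
```

If rebuilding the engine in pure FP32 (or disabling TF32 with trtexec's `--noTF32`) restores the score, precision is the likely culprit; a layer-by-layer comparison against ONNX Runtime (e.g. with Polygraphy's `polygraphy run --trt --onnxrt`) can then help localize the offending layer.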

Environment

TensorRT Version: 10.10
GPU Type: A100
Nvidia Driver Version:
CUDA Version:
CUDNN Version:
Operating System + Version:
Python Version (if applicable): 3.10
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered