Description
An FP16 engine built on Windows hangs during inference on Linux (same environment on both sides). By contrast, an FP32 engine built on Windows runs normally on Linux, and an FP16 engine built on Linux also works fine on Linux.
Environment
TensorRT Version: 8.6.1
GPU Type: GeForce RTX 3070
Nvidia Driver Version: 537.13
CUDA Version: 12.1
CUDNN Version: 8.9.1
Operating System + Version: Windows 10 + Ubuntu 22.04
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
Relevant Files
Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)
Steps To Reproduce
Please include:
- Exact steps/commands to build your repro
- Exact steps/commands to run your repro
- Full traceback of errors encountered
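The cross-platform build/run flow described above can be sketched with `trtexec` as a minimal repro, assuming the model is exported to ONNX (the file names `model.onnx` and `model_fp16.engine` are hypothetical placeholders, not from the report):

```shell
# On Windows (trtexec.exe from TensorRT 8.6.1): build an FP16 engine from the ONNX model
trtexec --onnx=model.onnx --fp16 --saveEngine=model_fp16.engine

# Copy model_fp16.engine to the Linux machine, then load it and run inference.
# This is the step that hangs in the FP16 case, while an FP32 engine
# (built without --fp16) runs normally.
trtexec --loadEngine=model_fp16.engine
```

Note that serialized TensorRT engines are generally documented as non-portable across platforms, so building the engine natively on the target OS (which the report confirms works) is the expected workflow; the FP32 engine happening to run is not guaranteed behavior.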