Description
TensorRT 7 does not seem to support switching GPU devices. I am accelerating a YOLOv5 model with TensorRT; it runs normally on GPU 0 of my server, but when I switch to GPU 1 I encounter the following errors:
[09/29/2020-10:57:03] [E] [TRT] …/rtSafe/safeContext.cpp (133) - Cudnn Error in configure: 7 (CUDNN_STATUS_MAPPING_ERROR)
[09/29/2020-10:57:03] [E] [TRT] FAILED_EXECUTION: std::exception
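CUDNN_STATUS_MAPPING_ERROR typically indicates the engine or execution context was created while a different CUDA device was current. A hedged sketch of two commonly suggested workarounds (these are assumptions based on how CUDA device selection generally works, not a confirmed TensorRT fix; the `set_cuda_device` helper and library name `libcudart.so` are illustrative):

```python
import os

# Workaround 1 (assumption): hide every card except the target one *before*
# any CUDA library is imported, so physical GPU 1 becomes ordinal 0 inside
# the process and the engine/context are created on it. Setting this after
# CUDA has initialized has no effect.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

# Workaround 2 (assumption): explicitly make GPU 1 the current device before
# deserializing the engine and creating the execution context.
# cudaSetDevice(int ordinal) returns cudaSuccess (0) on success.
import ctypes

def set_cuda_device(ordinal):
    """Hypothetical helper: call cudaSetDevice via the CUDA runtime, if present."""
    try:
        cudart = ctypes.CDLL("libcudart.so")
    except OSError:
        return None  # CUDA runtime not installed; nothing to do
    return cudart.cudaSetDevice(ordinal)
```

Either way, the key point is that device selection must happen before the TensorRT runtime, engine, and execution context are created in that thread.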
Environment
TensorRT Version: 7.0.0.11
GPU Type: GeForce RTX 2080 Ti
Nvidia Driver Version: 418.67
CUDA Version: 10
CUDNN Version: 7.6.5
Operating System + Version: CentOS 7
Python Version (if applicable): 3.6
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
Relevant Files
Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)
Steps To Reproduce
Please include:
- Exact steps/commands to build your repro
- Exact steps/commands to run your repro
- Full traceback of errors encountered