TensorRT Cudnn Error

Description

Hi, my C++ program runs both a TensorRT inference model and a LibTorch inference model, executed sequentially. It sometimes crashes with the error shown below, though the error does not appear every time. Could you give me some advice? Thank you very much.

[01/06/2021-06:16:43] [E] [TRT] …/rtSafe/cuda/cudaPoolingRunner.cpp (211) - Cudnn Error in execute: 8 (CUDNN_STATUS_EXECUTION_FAILED)
[01/06/2021-06:16:43] [E] [TRT] FAILED_EXECUTION: std::exception

Environment

TensorRT Version : 7.2.1
GPU Type : P2000
CUDA Version : 10.2
CUDNN Version : 8.0.4
Operating System + Version : Ubuntu 18.04
Program Language : C++
LibTorch Version : 1.6.0

Hi @jiangnan,

Please try the NVIDIA GPU Cloud (NGC) TensorRT-optimized containers, which remove many of the host-side dependencies.
https://ngc.nvidia.com/signin
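For example, assuming Docker and the NVIDIA Container Toolkit are already installed, a TensorRT container can be pulled from NGC and run with GPU access roughly like this (the `20.12-py3` tag is an assumption chosen to match TensorRT 7.2.x; check the NGC catalog for the release that matches your CUDA/TensorRT versions, and adjust the mounted path to your application):

```shell
# Pull a TensorRT container from NGC
# (the 20.12-py3 tag is an assumption; pick the release matching your setup)
docker pull nvcr.io/nvidia/tensorrt:20.12-py3

# Run it interactively with GPU access, mounting your application directory
# (/path/to/your/app is a placeholder for your own project location)
docker run --gpus all -it --rm \
    -v /path/to/your/app:/workspace/app \
    nvcr.io/nvidia/tensorrt:20.12-py3
```

Building and running your program inside the container helps rule out mismatches between the host's CUDA/cuDNN libraries and the versions TensorRT was built against.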

Thank you.