Failed to run TensorRT sample with the Python API on a GeForce GTX 1050 Ti

Description

  1. Failed to run the TensorRT Python samples on a GeForce GTX 1050 Ti; the C++ samples work fine on the same machine.
  2. Successfully ran the same TensorRT Python sample and C++ sample on a GeForce RTX 2080 Ti, using a Docker image saved from the setup in 1.

Error encountered:
[TensorRT] ERROR: …/rtSafe/safeContext.cpp (105) - Cudnn Error in initializeCommonContext: 1 (Could not initialize cudnn, please check cudnn installation.)
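
As a first check, cuDNN can be exercised independently of TensorRT. The snippet below is only a diagnostic sketch, assuming the PyTorch 1.4.0 install listed in the environment; it forces a cuDNN handle to be created on the GPU:

```python
# Diagnostic sketch (assumes the PyTorch 1.4.0 listed below): verify that cuDNN
# can be loaded and used on this GPU, independently of TensorRT.
import torch

print("CUDA available:", torch.cuda.is_available())
print("cuDNN version seen by PyTorch:", torch.backends.cudnn.version())

# A tiny convolution on the GPU forces cuDNN handle creation.
x = torch.randn(1, 1, 28, 28, device="cuda")
conv = torch.nn.Conv2d(1, 8, kernel_size=3).cuda()
y = conv(x)
print("cuDNN convolution OK, output shape:", tuple(y.shape))
```

If this also fails, the problem is in the CUDA/cuDNN installation itself rather than in TensorRT.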

Environment

TensorRT Version: 7.0.0.11
GPU Type: GeForce GTX 1050 Ti
Nvidia Driver Version: 410.78
CUDA Version: 10.0
CUDNN Version: 7.6.5
Operating System + Version: Ubuntu 16.04
Python Version (if applicable): 3.7.1
TensorFlow Version (if applicable): none
PyTorch Version (if applicable): 1.4.0
Baremetal or Container (if container which image + tag): Baremetal (Lenovo ThinkPad X1 Extreme)
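
For completeness, the versions the Python bindings actually load at runtime can be printed with a short sketch (assuming tensorrt and pycuda are importable; this is a diagnostic aid, not part of the sample):

```python
# Sketch: print the versions seen at runtime, to rule out a mismatch with the
# versions listed above.
import tensorrt as trt
import pycuda.driver as cuda

cuda.init()
print("TensorRT:", trt.__version__)                              # expected 7.0.0.11
print("CUDA version PyCUDA was built against:", cuda.get_version())
print("CUDA version supported by the driver:", cuda.get_driver_version())
```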

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)
https://developer.nvidia.com/compute/machine-learning/tensorrt/secure/7.0/7.0.0.11/tars/TensorRT-7.0.0.11.Ubuntu-16.04.x86_64-gnu.cuda-10.0.cudnn7.6.tar.gz

Steps To Reproduce

  1. Install CUDA 10.0, cuDNN 7.6.5, and TensorRT 7.0.0.11.
  2. Run the sample in TensorRT-7.0.0.11/samples/python/network_api_pytorch_mnist (a stripped-down sketch of the failing step is shown below).
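
The shipped sample is unmodified. The sketch below is not the sample itself; assuming a default TensorRT 7.0 + pycuda install, it isolates the step where the cuDNN error appears, i.e. building an engine and creating an execution context:

```python
# Minimal sketch (not the shipped sample): building a trivial engine and creating
# an execution context is enough to trigger the cuDNN initialization that fails.
import tensorrt as trt
import pycuda.autoinit  # creates a CUDA context on the default GPU

TRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network()              # implicit batch, as in the MNIST sample
inp = network.add_input("input", trt.float32, (1, 28, 28))
identity = network.add_identity(inp)
network.mark_output(identity.get_output(0))

builder.max_batch_size = 1
builder.max_workspace_size = 1 << 28            # 256 MiB

engine = builder.build_cuda_engine(network)
assert engine is not None, "engine build failed"
context = engine.create_execution_context()     # cuDNN is initialized during build/context creation
print("Execution context created successfully")
```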

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Hi @huangheqingbit,
This error is likely due to either (1) mismatched versions of libraries/dependencies (CUDA, cuDNN, TensorRT, driver) or (2) an out-of-memory (OOM) condition on the GPU.
We recommend trying the same sample in an NGC container to avoid system-dependency issues.
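If OOM is the suspect, a quick check (a sketch only, using pycuda, which the Python samples already require) is to print the free GPU memory just before the engine is built; the 1050 Ti has only 4 GB, so lowering builder.max_workspace_size is also worth trying:

```python
# Sketch for the OOM hypothesis: report free/total GPU memory before engine build.
import pycuda.autoinit   # creates a CUDA context on the default GPU
import pycuda.driver as cuda

free, total = cuda.mem_get_info()
print("GPU memory: %.0f MiB free of %.0f MiB" % (free / 2**20, total / 2**20))
```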
Thanks!