Cudnn Error in execute: 8

I ran the sample ./python/introductory_parser_samples/uff_resnet50.py and got the following error:

[TensorRT] ERROR: cuda/cudaConvolutionLayer.cpp (163) - Cudnn Error in execute: 8
[TensorRT] ERROR: cuda/cudaConvolutionLayer.cpp (163) - Cudnn Error in execute: 8
Traceback (most recent call last):
  File "uff_resnet50.py", line 153, in <module>
    main()
  File "uff_resnet50.py", line 133, in main
    with build_engine_uff(uff_model_file) as engine:
AttributeError: __exit__
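
The follow-up AttributeError looks like a side effect of the failed build rather than a separate problem: build_engine_uff() presumably returns None when builder.build_cuda_engine() fails, and handing None to the with statement then fails on __exit__. A minimal sketch of that failure mode (hypothetical code, not the actual sample):

def build_engine_uff(model_file):
    # Stands in for the real builder logic; build_cuda_engine() returns None
    # when the engine build fails (e.g. after the cuDNN error above).
    return None

def main():
    engine = build_engine_uff("resnet50-infer-5.uff")
    if engine is None:
        # Guard that makes the failure explicit; without it,
        # "with None as engine:" raises the AttributeError: __exit__ above.
        raise SystemExit("Engine build failed; see the cuDNN error above")
    with engine:
        pass

if __name__ == "__main__":
    main()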

What does the cuDNN error code itself (8) mean?
I used the .uff file provided at ./python/data/resnet50/resnet50-infer-5.uff.

My environment is as follows:
Ubuntu 16.04
TensorRT 5.2.0.6
CUDA 9.0
cuDNN 7.3.1
GPU: RTX 2080 Ti
Python 3.5.6
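
For reference, this is a quick way to double-check what Python itself sees (a sketch; it assumes the tensorrt and pycuda packages are installed, as in the environment above):

import tensorrt as trt
import pycuda.driver as cuda

# Version reported by the TensorRT Python bindings
print("TensorRT:", trt.__version__)

# CUDA driver and GPU as seen by PyCUDA
cuda.init()
print("CUDA driver version:", cuda.get_driver_version())
print("GPU:", cuda.Device(0).name())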

Hello,

Using the latest TRT 19.03 container, I’m able to run the example:

root@42a0c7ebe7dc:/workspace/tensorrt/samples/python/introductory_parser_samples# python uff_resnet50.py -d /opt/tensorrt/data/resnet50/
WARNING: /opt/tensorrt/data/resnet50/samples/resnet50 does not exist. Trying /opt/tensorrt/data/resnet50/ instead.
Correctly recognized /opt/tensorrt/data/resnet50/tabby_tiger_cat.jpg as tabby

Recommend trying the NVIDIA GPU Cloud (NGC) TensorRT-optimized containers, which remove many of the host-side dependencies.

Thank you very much. I tried the NGC TRT 19.03 container and ran ResNet-50 successfully.

I checked the installed Python dependencies in the container and found that TensorFlow appears to be a non-GPU build of 1.13. Does that matter?
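
For what it’s worth, this is roughly how I checked (a sketch using the standard TF 1.x APIs):

import tensorflow as tf

print(tf.__version__)                 # 1.13.x in the container
print(tf.test.is_built_with_cuda())   # False for a CPU-only build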

I did not use a TensorRT Docker image; how can I solve this problem? @NVES

Any other solution without using Docker?