cuDNN Error

I have set up CUDA 8 and installed tensorflow-gpu 1.4. When I run a deep learning model in TensorFlow I get the following output/error:

2019-01-09 20:51:18.573507: I tensorflow/core/platform/] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2019-01-09 20:51:18.687853: I tensorflow/stream_executor/cuda/] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-01-09 20:51:18.688252: I tensorflow/core/common_runtime/gpu/] Found device 0 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.683
pciBusID: 0000:01:00.0
totalMemory: 10.91GiB freeMemory: 10.69GiB
2019-01-09 20:51:18.688266: I tensorflow/core/common_runtime/gpu/] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
Number of files used: 1
Number of files used: 1
Global step 0 :
2019-01-09 20:51:21.993997: E tensorflow/stream_executor/cuda/] could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
2019-01-09 20:51:21.994042: E tensorflow/stream_executor/cuda/] could not destroy cudnn handle: CUDNN_STATUS_BAD_PARAM
2019-01-09 20:51:21.994061: F tensorflow/core/kernels/] Check failed: stream->parent()->GetConvolveAlgorithms( conv_parameters.ShouldIncludeWinogradNonfusedAlgo(), &algorithms)
Aborted (core dumped)

From the error, I think the issue is with my cuDNN install. When I look in /usr/include/cudnn.h I see that the version is 6. I'm confused about what the actual problem could be and how to resolve it. I believe CUDA itself is set up correctly and I just need to fix cuDNN, but I'm not sure how.
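For reference, this is roughly how I read the version out of the header. It's a minimal sketch; the `parse_cudnn_version` helper is just for illustration, and the header path is the one on my system, so adjust it if yours differs:

```python
import re

def parse_cudnn_version(header_text):
    """Extract the cuDNN version macros (major/minor/patchlevel) from cudnn.h text."""
    version = {}
    for name in ("CUDNN_MAJOR", "CUDNN_MINOR", "CUDNN_PATCHLEVEL"):
        # The header defines lines like: #define CUDNN_MAJOR 6
        m = re.search(r"#define\s+%s\s+(\d+)" % name, header_text)
        if m:
            version[name] = int(m.group(1))
    return version

if __name__ == "__main__":
    # Path from my system; change it if your cudnn.h lives elsewhere
    with open("/usr/include/cudnn.h") as f:
        print(parse_cudnn_version(f.read()))
```

On my machine this reports CUDNN_MAJOR as 6, which is how I know the installed version.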

I have the same problem. Did you manage to solve it? Thank you.