CUDNN_STATUS_INTERNAL_ERROR when using cudnn7.0 with CUDA 8.0

Hi guys,

I am trying to install cuDNN 7.0 on my laptop. The installation went fine, but when I try to run the test samples, an error appears:

cudnnGetVersion() : 7003 , CUDNN_VERSION from cudnn.h : 7003 (7.0.3)
Host compiler version : GCC 5.4.0
There are 1 CUDA capable devices on your machine :
device 0 : sms 10  Capabilities 5.2, SmClock 1038.0 Mhz, MemSize (Mb) 6078, MemClock 2505.0 Mhz, Ecc=0, boardGroupID=0
Using device 0

Testing single precision
CUDNN failure
Error: CUDNN_STATUS_INTERNAL_ERROR
mnistCUDNN.cpp:394
Aborting...

Has anyone encountered a similar problem before? How can I fix it? Thank you very much.

I'm experiencing the exact same problem.

I'm assuming we have to revert to GCC 5.3, which would be painful to do given my knowledge of the compiler setup.

I’m also having the same problem with CUDA 9 and cuDNN 7.0. Is there a solution to this?

Hi mjian080/zhuoqchang,

Can you share your driver version and CUDA version?

The driver version is 384.90 and the CUDA version is 9. I tried with CUDA 8 and got the same error message. This seems to be an issue with cuDNN, since the CUDA samples run fine.

Hmm, I am not sure, because all I wanted was to set up tensorflow-gpu, which I got working.

Anyway, this might not be related to your issues, but make sure you have:

the CUDA compiler (toolkit), whichever version you want
the cuDNN library, a version matching your CUDA version
the CUDA driver; this is funny, because the documentation assumed you already have the CUDA driver, which is not necessarily true

and you need all of those for it to work, I think.
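To make those checks concrete, here is a small sketch that prints the three versions side by side so you can compare them against the cuDNN support matrix. The cudnn.h path is a typical Ubuntu deb-install location and is an assumption; tar installs usually put it under /usr/local/cuda/include instead.

```shell
#!/bin/sh
# Print driver, CUDA toolkit, and cuDNN versions so you can check
# that they are a supported combination. Each probe falls back to
# "unknown" if the tool or header is missing on this machine.
driver=$(nvidia-smi --query-gpu=driver_version --format=csv,noheader 2>/dev/null || true)
toolkit=$(nvcc --version 2>/dev/null | grep -o 'release [0-9.]*' || true)
cudnn=$(grep -m1 '#define CUDNN_MAJOR' /usr/include/cudnn.h 2>/dev/null || true)
echo "driver:  ${driver:-unknown}"
echo "toolkit: ${toolkit:-unknown}"
echo "cudnn:   ${cudnn:-unknown}"
```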

Hi Guys,

All I wanted was to set up TensorFlow, too. It seems that TensorFlow only works with CUDA 8.0 and cuDNN 6.0.

I am using these settings right now:
CUDA 8.0
cuDNN 6.0
NVIDIA Driver 375.82
Ubuntu 16.04

And everything works fine for now.

It turns out that CUBLAS was not working due to a corrupt cache. I fixed my issue with the following command:

sudo rm -rf ~/.nv/
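For anyone copying this: the cache lives under your home directory, so a slightly safer sketch of the same fix checks that the directory exists first. sudo should only be needed if an earlier root run left the cache root-owned.

```shell
#!/bin/sh
# Remove the per-user NVIDIA compute cache. A corrupt cache in
# ~/.nv is a known cause of CUDNN_STATUS_INTERNAL_ERROR in the
# cuDNN test samples.
CACHE_DIR="${HOME}/.nv"
if [ -d "$CACHE_DIR" ]; then
    rm -rf "$CACHE_DIR" && echo "removed $CACHE_DIR"
else
    echo "no cache directory at $CACHE_DIR"
fi
```

After clearing the cache, re-run ./mnistCUDNN without sudo; if it still only passes as root, a permission problem elsewhere is the likely cause.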

What happens if you run the MNIST test sample as root? Do you still get the error?

Hi txbob,

Thanks for your reply. I have solved the problem by deleting the cache with this command:

sudo rm -rf ~/.nv/

By the way, how can I mark this thread as solved?

Thanks txbob, I tried with sudo and the problem got solved. I was trying out different cuDNN versions.

nvcc -V
Cuda compilation tools, release 8.0, V8.0.61

uname -a
Linux kiran-Z370-HD3P 4.10.0-40-generic #44~16.04.1-Ubuntu SMP Thu Nov 9 15:37:44 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

cuDNN 6

Did you fix your problem? I am using CUDA 9.0 with cuDNN 7.0.5 as well.

cudnnGetVersion() : 7005 , CUDNN_VERSION from cudnn.h : 7005 (7.0.5)
Host compiler version : GCC 5.4.0
Cuda failure
Error: CUDA driver version is insufficient for CUDA runtime version
error_util.h:93
Aborting...

I have the same issues…
nvcc -V:
Cuda compilation tools, release 9.0, V9.0.176

Linux XXXX 4.13.0-41-generic #46~16.04.1-Ubuntu SMP Thu May 3 10:06:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

OS: Ubuntu 16.04
GPU: GTX 1080 Ti
CUDA: Cuda compilation tools, release 9.0, V9.0.176
cuDNN: 7.1.3
driver version: NVIDIA-SMI 384.111 Driver Version: 384.111

I have passed the CUDA tests:
./deviceQuery
./bandwidthTest

I need to add sudo to pass the mnistCUDNN test.

The following commands did not help:
sudo usermod -a -G nvidia-persistenced $USER
sudo rm -rf ~/.nv/

I still cannot use cuDNN to run TensorFlow.

Please advise!
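One more thing worth checking when mnistCUDNN only passes under sudo: whether ~/.nv or the sample build directory ended up root-owned from an earlier sudo run. A minimal check (the cudnn_samples_v7 path is an assumption; adjust it to wherever you actually built the samples):

```shell
#!/bin/sh
# List ownership of the NVIDIA cache and the cuDNN sample build
# directory; if either is owned by root, chown it back to your
# user instead of running everything under sudo.
for d in "$HOME/.nv" "$HOME/cudnn_samples_v7"; do
    if [ -e "$d" ]; then
        ls -ld "$d"
    else
        echo "$d: not present"
    fi
done
```

If ls shows root as the owner, something like `sudo chown -R $USER: ~/.nv` lets you run the test as your normal user again.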

Hi, I have the same issue with:

CUDA 9.0
cudnn 7.0
NVIDIA Driver 384.145 (but also tried 390)
Ubuntu 16.04
GPU 720M

Rebooting or using sudo for the cuDNN test is not working.

I still get this

cudnnGetVersion() : 7005 , CUDNN_VERSION from cudnn.h : 7005 (7.0.5)
Host compiler version : GCC 5.4.0
There are 1 CUDA capable devices on your machine :
device 0 : sms 2 Capabilities 2.1, SmClock 1250.0 Mhz, MemSize (Mb) 1985, MemClock 800.0 Mhz, Ecc=0, boardGroupID=0
Using device 0

Testing single precision
CUDNN failure
Error: CUDNN_STATUS_ARCH_MISMATCH
mnistCUDNN.cpp:394
Aborting…

Thanks a lot for your help

Also, from where are you running

sudo rm -rf ~/.nv/

thanks