cuDNN v6: MNIST example compile errors

Dear all,

I've just installed cuDNN v6.0 and am trying to compile/run the example files. I was able to build and run the RNN example smoothly.
However, the cudnn_samples_v6/mnistCUDNN example gives me compile errors from make (see details below).
It appears something is wrong with the enum declarations in cuda_runtime_api.h. Any suggestions?

Compile error details:
nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning).
/usr/local/cuda/bin/nvcc -ccbin g++ -I/usr/local/cuda/include -IFreeImage/include -m64 -gencode arch=compute_30,code=sm_30 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_53,code=sm_53 -gencode arch=compute_53,code=compute_53 -o fp16_dev.o -c fp16_dev.cu
g++ -I/usr/local/cuda/include -IFreeImage/include -o fp16_emu.o -c fp16_emu.cpp
g++ -I/usr/local/cuda/include -IFreeImage/include -o mnistCUDNN.o -c mnistCUDNN.cpp
In file included from /usr/local/cuda/include/channel_descriptor.h:62:0,
from /usr/local/cuda/include/cuda_runtime.h:90,
from /usr/include/cudnn.h:64,
from mnistCUDNN.cpp:30:
/usr/local/cuda/include/cuda_runtime_api.h:1628:101: error: use of enum ‘cudaDeviceP2PAttr’ without previous declaration
extern __host__ __cudart_builtin__ cudaError_t CUDARTAPI cudaDeviceGetP2PAttribute(int *value, enum cudaDeviceP2PAttr attr, int srcDevice, int dstDevice);

In file included from /usr/local/cuda/include/channel_descriptor.h:62:0,
from /usr/local/cuda/include/cuda_runtime.h:90,
from /usr/include/cudnn.h:64,
from mnistCUDNN.cpp:30:
/usr/local/cuda/include/cuda_runtime_api.h:5382:92: error: use of enum ‘cudaMemoryAdvise’ without previous declaration
extern __host__ cudaError_t CUDARTAPI cudaMemAdvise(const void *devPtr, size_t count, enum cudaMemoryAdvise advice, int device);

/usr/local/cuda/include/cuda_runtime_api.h:5438:98: error: use of enum ‘cudaMemRangeAttribute’ without previous declaration
extern __host__ cudaError_t CUDARTAPI cudaMemRangeGetAttribute(void *data, size_t dataSize, enum cudaMemRangeAttribute attribute, const void *devPtr, size_

/usr/local/cuda/include/cuda_runtime_api.h:5474:102: error: use of enum ‘cudaMemRangeAttribute’ without previous declaration
extern __host__ cudaError_t CUDARTAPI cudaMemRangeGetAttributes(void **data, size_t *dataSizes, enum cudaMemRangeAttribute *attributes, size_t numAttribute

Makefile:200: recipe for target ‘mnistCUDNN.o’ failed
make: *** [mnistCUDNN.o] Error 1

I don’t have any trouble with it:

$ make
nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning).
/usr/local/cuda/bin/nvcc -ccbin g++ -I/usr/local/cuda/include -IFreeImage/include  -m64    -gencode arch=compute_30,code=sm_30 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_53,code=sm_53 -gencode arch=compute_53,code=compute_53 -o fp16_dev.o -c fp16_dev.cu
g++ -I/usr/local/cuda/include -IFreeImage/include   -o fp16_emu.o -c fp16_emu.cpp
g++ -I/usr/local/cuda/include -IFreeImage/include   -o mnistCUDNN.o -c mnistCUDNN.cpp
/usr/local/cuda/bin/nvcc -ccbin g++   -m64      -gencode arch=compute_30,code=sm_30 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_53,code=sm_53 -gencode arch=compute_53,code=compute_53 -o mnistCUDNN fp16_dev.o fp16_emu.o mnistCUDNN.o  -LFreeImage/lib/linux/x86_64 -LFreeImage/lib/linux -lcudart -lcublas -lcudnn -lfreeimage -lstdc++ -lm
$ ls
data  error_util.h  fp16_dev.cu  fp16_dev.h  fp16_dev.o  fp16_emu.cpp  fp16_emu.h  fp16_emu.o  FreeImage  gemv.h  Makefile  mnistCUDNN  mnistCUDNN.cpp  mnistCUDNN.o  readme.txt
$

Maybe you have a corrupted CUDA 8 install. Are you using CUDA 8.0.61?

Thanks for the response txbob.

I uninstalled CUDA, re-installed it with cuda_8.0.61_375.26_linux.run, and the deviceQuery result was a PASS, as shown below.

Peer access from Tesla K40c (GPU0) → Tesla K40c (GPU1) : Yes
Peer access from Tesla K40c (GPU0) → GeForce GT 610 (GPU2) : No
Peer access from Tesla K40c (GPU1) → Tesla K40c (GPU0) : Yes
Peer access from Tesla K40c (GPU1) → GeForce GT 610 (GPU2) : No
Peer access from GeForce GT 610 (GPU2) → Tesla K40c (GPU0) : No
Peer access from GeForce GT 610 (GPU2) → Tesla K40c (GPU1) : No

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 8.0, CUDA Runtime Version = 8.0, NumDevs = 3, Device0 = Tesla K40c, Device1 = Tesla K40c, Device2 = GeForce GT 610
Result = PASS

Here is some additional version info:

cat /usr/local/cuda-8.0/version.txt → CUDA Version 8.0.61
nvcc --version → Cuda compilation tools, release 7.5, V7.5.17

So you have a corrupted install: version.txt reports 8.0.61, but the nvcc on your PATH is still the CUDA 7.5 compiler.

Follow the Linux install guide instructions carefully.
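
A minimal check, assuming the default /usr/local/cuda-8.0 location (the exports are the standard post-install environment steps from that guide; adjust the paths if your install differs):

$ export PATH=/usr/local/cuda-8.0/bin${PATH:+:${PATH}}
$ export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
$ which nvcc        # should now resolve to /usr/local/cuda-8.0/bin/nvcc, not a leftover 7.5 install
$ nvcc --version    # should report release 8.0, matching version.txt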

Yes, the CUDA/compiler version conflict was the root cause… with the new install, it compiles OK!

For what it is worth, I am having the same problem compiling Caffe2, and I saw that someone had the same problem with Theano.

https://github.com/Theano/Theano/issues/5856

OK, I tried their suggestion and it worked.
Open the file:
/usr/include/cudnn.h

And try change the line:
#include "driver_types.h"

to:
#include <driver_types.h>

Changing the include style of driver_types.h fixed the problem for me!
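
If you prefer to apply the edit from a terminal, a one-liner along these lines should do the same thing (a sketch, assuming cudnn.h is at /usr/include/cudnn.h as above; it keeps a .bak backup of the original header):

$ sudo sed -i.bak 's|#include "driver_types.h"|#include <driver_types.h>|' /usr/include/cudnn.h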

ralph058’s solution worked for me, +1

Eternally thankful to ralph058! Immediately fixed the issue.

ralph058’s solution also worked for me, +1

ralph058’s fix also worked for me

Good fix. Worked for me too +1

Worked for me too, thanks!

Hi, it looks like I have a similar issue.
I installed CUDA on my Ubuntu 16.04 server using apt-get install cuda (9.1.85).

nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2015 NVIDIA Corporation
Built on Tue_Aug_11_14:27:32_CDT_2015
Cuda compilation tools, release 7.5, V7.5.17

Then I installed cuDNN after downloading it from the official web site (cudnn-9.1-linux-x64-v7.tgz).
http://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html#installlinux-tar

When I tried to test my installation, I first got an error related to the g++ compiler. After installing an older g++, I managed to compile with no errors. Unfortunately, when I run the test, I get the following:

./mnistCUDNN
cudnnGetVersion() : 7005 , CUDNN_VERSION from cudnn.h : 7005 (7.0.5)
Host compiler version : GCC 7.2.0
Cuda failure
Error: unknown error
error_util.h:93
Aborting...

Can anyone help me with that issue?
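
(Note that the nvcc --version output above still reports release 7.5 even though the toolkit installed via apt is 9.1.85, which is the same kind of toolkit/compiler mismatch discussed earlier in this thread. A minimal check, assuming the default /usr/local/cuda-9.1 install path:)

$ which nvcc                       # should resolve into /usr/local/cuda-9.1/bin
$ cat /usr/local/cuda/version.txt  # toolkit version actually on disk
$ nvidia-smi                       # driver version; it must support CUDA 9.1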

The file /usr/include/cudnn.h is read-only.
How can I edit it?

sudo gedit /usr/include/cudnn.h
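
If there is no desktop environment (for example on a headless server), a terminal editor works the same way; it is sudo that grants write access to the root-owned file:

$ sudoedit /usr/include/cudnn.h    # or: sudo nano /usr/include/cudnn.h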

We created a new “Deep Learning Training and Inference” section in Devtalk to improve the experience for deep learning, accelerated computing, and HPC users:
https://devtalk.nvidia.com/default/board/301/deep-learning-training-and-inference-/

We are moving active deep learning threads to the new section.

URLs for topics will not change with the re-categorization, so your bookmarks and links will continue to work as before.

-Siddharth