batch normalization: CUDNN_STATUS_NOT_SUPPORTED

Dear all,

When I was training a simple multilayer perceptron network, I got an error like:

failed to enqueue forward batch normalization on stream: CUDNN_STATUS_NOT_SUPPORTED

then

cuDNN launch failure: input shape(977873, 1, 1, 50)

I used batch normalization in the network. It also seems that the input shape is not correct: the batch dimension was set to -1 (any size, and here it correctly resolves to 977873), but the feature number should be 31, not 50; the input has 2 dimensions, and 50 is the number of input layer units. So I think when TF expands the inputs to 4D, the shape should be (977873, 1, 1, 31), not (977873, 1, 1, 50).
By the way, I think CUDA and cuDNN were installed successfully, since other scripts that don't use batch norm ran fine.
My environment:
TensorFlow 1.6.0
CUDA 9.0
cuDNN 7.0.5
GPU: GTX 1050 Ti
Ubuntu 16.04 LTS

Any reply is welcome.
Best


I found the reason:
In TensorFlow 1.3, tf.contrib.layers.batch_norm has the parameter fused defaulting to False, but in TF 1.6.0 the default became None. So when using TF 1.6, set fused=False and everything works fine.
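For reference, a minimal sketch of what that looks like; the placeholder name, layer sizes, and is_training flag are just illustrative, not taken from the original script:

```python
import tensorflow as tf

# Assumed shapes: 31 input features, a 50-unit hidden layer (illustrative only).
x = tf.placeholder(tf.float32, shape=[None, 31])
hidden = tf.contrib.layers.fully_connected(x, 50, activation_fn=None)

# fused=False forces the non-fused (non-cuDNN) batch norm kernel,
# which avoids the CUDNN_STATUS_NOT_SUPPORTED failure on these inputs.
normed = tf.contrib.layers.batch_norm(hidden, fused=False, is_training=True)
out = tf.nn.relu(normed)
```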

Really helps! Thanks for your solution!