Could not create cudnn handle: CUDNN_STATUS_ALLOC_FAILED

The Stack Overflow post about the error is:

Like the link says, I just installed everything fresh, all the proper versions, and it’s simply not working. I’ve looked through the code, and it’s all good. I’ve been modifying it for my own purposes.

I don’t know where else to ask, and I feel like I’m stuck, unable to fully achieve my goals :(


This could happen for a few reasons.

  1. As you mentioned, it may be a memory issue, which you could try to verify by allocating less memory to the GPU and seeing if that error still occurs. You can do this in TF 2.0 like so (the 1024 MB limit below is just an example value):

import tensorflow as tf

# Cap TF at 1 GB of GPU memory instead of letting it grab nearly all of it
gpus = tf.config.experimental.list_physical_devices('GPU')
tf.config.experimental.set_virtual_device_configuration(
    gpus[0], [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024)])

# your model creation, etc.
model = MyModel(...)

I see the code you’re running only sets dynamic memory growth if you have more than one GPU, but since you only have one GPU, it is likely just trying to allocate almost all of the memory (>90%) at the start.

  2. Some users seem to have experienced this on Windows when there were other TensorFlow (or similar) processes using the GPU simultaneously, whether started by you or by other users:

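To see what is already using the GPU, you can query the driver with nvidia-smi. A minimal sketch (it assumes the NVIDIA driver, which ships nvidia-smi, is installed; the guard keeps it runnable either way):

```python
import shutil
import subprocess

# nvidia-smi ships with the NVIDIA driver; guard so this also runs
# on machines where it is missing from PATH.
if shutil.which("nvidia-smi"):
    # The "Processes" table at the bottom lists every PID using the GPU
    result = subprocess.run(["nvidia-smi"], capture_output=True, text=True)
    print(result.stdout)
else:
    print("nvidia-smi not found - is the NVIDIA driver on your PATH?")
```

If another TensorFlow process shows up in that table, stop it (or wait for it to finish) before starting your own run.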
  3. As always, make sure your PATH variables are correct. If you tried multiple installations and didn’t clean things up properly, PATH may resolve to the wrong version first and use that, causing the issue. If you add new paths to the beginning of PATH, they will be found first:

  4. As mentioned on your Stack Overflow post by another user, you could try upgrading to a newer version of cuDNN, though I’m not sure this will help, since your config is listed as supported in the TF docs. If this does solve it, it may have been a PATH issue after all, since you will likely update the PATHs after installing the newer version.

I had a similar issue with the same error (CUDA 10.0, Windows 10). I realized it was happening because I was running the program from within an IDE. Once I ran the program from cmd, it worked perfectly.

This worked for me.

From the TensorFlow 2.0 documentation:

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
  try:
    # Currently, memory growth needs to be the same across GPUs
    for gpu in gpus:
      tf.config.experimental.set_memory_growth(gpu, True)
    logical_gpus = tf.config.experimental.list_logical_devices('GPU')
    print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
  except RuntimeError as e:
    # Memory growth must be set before GPUs have been initialized
    print(e)