tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR

import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    try:
        # Currently, memory growth needs to be the same across GPUs
        for gpu in gpus:
            tf.config.experimental.set_memory_growth(gpu, True)
        logical_gpus = tf.config.experimental.list_logical_devices('GPU')
        print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
    except RuntimeError as e:
        # Memory growth must be set before GPUs have been initialized
        print(e)
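If you'd rather not edit the script, TensorFlow also reads the TF_FORCE_GPU_ALLOW_GROWTH environment variable at startup, which enables the same incremental-allocation behavior. A minimal sketch (the script name is just a placeholder):

```shell
# Make TensorFlow allocate GPU memory incrementally instead of
# grabbing it all up front, which often avoids CUDNN_STATUS_INTERNAL_ERROR
export TF_FORCE_GPU_ALLOW_GROWTH=true
python train.py  # hypothetical script name
```

Setting it this way applies the fix to every TensorFlow process launched from that shell, which is handy when you can't modify the code.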

This piece of code from here (Could not create cudnn handle: CUDNN_STATUS_ALLOC_FAILED - #2 by NVES_R) might help you, as it did in my case.