Issues with Tensorflow on CUDA10 and RTX2080

I’ve built TensorFlow 1.13 from source (enabling CUDA compute capability 7.5) and have also tried the PyPI nightly build.
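For reference, a from-source build targeting both cards looks roughly like this (a sketch, not my exact invocation; 6.1 and 7.5 are the compute capabilities of the GTX 1080 Ti and RTX 2080 respectively, and the configure prompts vary by TF version):

```shell
# Pre-seed the configure script so nvcc compiles kernels for both GPUs
export TF_NEED_CUDA=1
export TF_CUDA_COMPUTE_CAPABILITIES=6.1,7.5   # 6.1 = GTX 1080 Ti, 7.5 = RTX 2080

./configure
bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
```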

When I point TensorFlow 1.13 w/ CUDA 10 at my GTX 1080 Ti, TensorFlow’s CuDNN RNNs run fine.

However, when I point the same software config at my RTX 2080, I get the following error:

2019-01-23 11:00:20.842143: I tensorflow/stream_executor/platform/default/dso_loader.cc:154] successfully opened CUDA library libcublas.so.10.0 locally
2019-01-23 11:00:20.997636: I tensorflow/stream_executor/platform/default/dso_loader.cc:154] successfully opened CUDA library libcudnn.so.7 locally
2019-01-23 11:00:21.670415: E tensorflow/stream_executor/cuda/cuda_dnn.cc:493] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
2019-01-23 11:00:21.670478: W tensorflow/core/framework/op_kernel.cc:1412] OP_REQUIRES failed at cudnn_rnn_ops.cc:1280 : Unknown: Fail to find the dnn implementation.
Traceback (most recent call last):
  File "./lstm.py", line 135, in <module>
    train_network()
  File "./lstm.py", line 33, in train_network
    train(model, network_input, network_output)
  File "./lstm.py", line 132, in train
    model.fit(network_input, network_output, epochs=1000, batch_size=64, callbacks=callbacks_list)
  File "/home/mulderg/.local/lib/python3.6/site-packages/keras/engine/training.py", line 1039, in fit
    validation_steps=validation_steps)
  File "/home/mulderg/.local/lib/python3.6/site-packages/keras/engine/training_arrays.py", line 199, in fit_loop
    outs = f(ins_batch)
  File "/home/mulderg/.local/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 2715, in __call__
    return self._call(inputs)
  File "/home/mulderg/.local/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 2675, in _call
    fetched = self._callable_fn(*array_vals)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1440, in __call__
    run_metadata_ptr)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/errors_impl.py", line 544, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.UnknownError: Fail to find the dnn implementation.
  [[{{node cu_dnnlstm_1/CudnnRNN}}]]
  [[loss/mul/_135]]

Here are the CUDA-related packages installed on my Ubuntu 18.04 machine:

dpkg -l | awk '/cuda/ {printf("%50s %20s\n", $2, $3)}'
cuda 10.0.130-1
cuda-10-0 10.0.130-1
cuda-command-line-tools-10-0 10.0.130-1
cuda-compiler-10-0 10.0.130-1
cuda-cublas-10-0 10.0.130-1
cuda-cublas-dev-10-0 10.0.130-1
cuda-cudart-10-0 10.0.130-1
cuda-cudart-dev-10-0 10.0.130-1
cuda-cufft-10-0 10.0.130-1
cuda-cufft-dev-10-0 10.0.130-1
cuda-cuobjdump-10-0 10.0.130-1
cuda-cupti-10-0 10.0.130-1
cuda-curand-10-0 10.0.130-1
cuda-curand-dev-10-0 10.0.130-1
cuda-cusolver-10-0 10.0.130-1
cuda-cusolver-dev-10-0 10.0.130-1
cuda-cusparse-10-0 10.0.130-1
cuda-cusparse-dev-10-0 10.0.130-1
cuda-demo-suite-10-0 10.0.130-1
cuda-documentation-10-0 10.0.130-1
cuda-driver-dev-10-0 10.0.130-1
cuda-drivers 410.48-1
cuda-gdb-10-0 10.0.130-1
cuda-gpu-library-advisor-10-0 10.0.130-1
cuda-libraries-10-0 10.0.130-1
cuda-libraries-dev-10-0 10.0.130-1
cuda-license-10-0 10.0.130-1
cuda-memcheck-10-0 10.0.130-1
cuda-misc-headers-10-0 10.0.130-1
cuda-npp-10-0 10.0.130-1
cuda-npp-dev-10-0 10.0.130-1
cuda-nsight-10-0 10.0.130-1
cuda-nsight-compute-10-0 10.0.130-1
cuda-nvcc-10-0 10.0.130-1
cuda-nvdisasm-10-0 10.0.130-1
cuda-nvgraph-10-0 10.0.130-1
cuda-nvgraph-dev-10-0 10.0.130-1
cuda-nvjpeg-10-0 10.0.130-1
cuda-nvjpeg-dev-10-0 10.0.130-1
cuda-nvml-dev-10-0 10.0.130-1
cuda-nvprof-10-0 10.0.130-1
cuda-nvprune-10-0 10.0.130-1
cuda-nvrtc-10-0 10.0.130-1
cuda-nvrtc-dev-10-0 10.0.130-1
cuda-nvtx-10-0 10.0.130-1
cuda-nvvp-10-0 10.0.130-1
cuda-repo-ubuntu1804-10-0-local-10.0.130-410.48 1.0-1
cuda-runtime-10-0 10.0.130-1
cuda-samples-10-0 10.0.130-1
cuda-toolkit-10-0 10.0.130-1
cuda-tools-10-0 10.0.130-1
cuda-visual-tools-10-0 10.0.130-1
libcudnn7 7.4.2.24-1+cuda10.0
libcudnn7-dev 7.4.2.24-1+cuda10.0
nccl-repo-ubuntu1804-2.3.7-ga-cuda10.0 1-1

Same issue here: RTX 2070, tf-nightly-gpu, and CUDA 10 + cuDNN (every version from 7.3 through 7.4 fails).

I’m also experiencing this with CUDA 10 + cuDNN 7.4 + tf-nightly-gpu + RTX 2070.

I ended up fixing this issue with the allow_growth = True workaround described in a comment on https://github.com/tensorflow/tensorflow/issues/24496
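For anyone landing here, that workaround looks roughly like this (a sketch for TF 1.x with standalone Keras, matching the traceback above; it needs a GPU machine with TF 1.x installed to actually run). Put it before building the model:

```python
import tensorflow as tf
from keras import backend as K  # standalone Keras, as in the traceback above

# Ask TensorFlow to grow GPU memory on demand instead of pre-allocating
# nearly all of it; on RTX cards the default pre-allocation can leave
# cuDNN unable to create its handle (CUDNN_STATUS_INTERNAL_ERROR).
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
K.set_session(tf.Session(config=config))
```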