Verifying that the GPU is actually used in Keras/TensorFlow, not just detected as present

I’ve just built a deep learning rig (AMD 12-core Threadripper; GeForce RTX 2080 Ti; 64 GB RAM). I originally wanted to install cuDNN and CUDA on Ubuntu 19.0, but the installation was too painful, and after reading around a bit I decided to switch to Windows 10…

After several installs of tensorflow-gpu, both inside and outside conda environments, I ran into further issues which I assumed were due to cuDNN/CUDA/TensorFlow version compatibility, so I uninstalled various versions of CUDA and TF. My output from nvcc --version:

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Sat_Aug_25_21:08:04_Central_Daylight_Time_2018
Cuda compilation tools, release 10.0, V10.0.130

I also have:

import tensorflow as tf
import keras
from keras import backend
from tensorflow.python.client import device_lib
from tensorflow.python.platform import build_info as tf_build_info

# List every device TF can see (CPU and, hopefully, the GPU)
print("My device: {}".format(device_lib.list_local_devices()))
if tf.test.gpu_device_name():
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
else:
    print("Please install GPU version of TF")
print("keras version: {0} | Backend used: {1}".format(keras.__version__, backend.backend()))
print("tensorflow version: {0} | Backend used: {1}".format(tf.__version__, backend.backend()))
print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
print("CUDA: {0} | CUDnn: {1}".format(tf_build_info.cuda_version_number, tf_build_info.cudnn_version_number))

with output:

My device: [name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 12853915229880452239
, name: "/device:GPU:0"
device_type: "GPU"
memory_limit: 9104897474
locality {
  bus_id: 1
  links {
  }
}
incarnation: 7328135816345461398
physical_device_desc: "device: 0, name: GeForce RTX 2080 Ti, pci bus id: 0000:42:00.0, compute capability: 7.5"
]
Default GPU Device: /device:GPU:0
keras version: 2.3.1 | Backend used: tensorflow
tensorflow version: 2.1.0 | Backend used: tensorflow
Num GPUs Available:  1
CUDA: 10.1 | CUDnn: 7
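Since listing devices only proves the GPU is visible, not used, I understand TF 2.x can also log each op's device placement. This is a minimal sketch of what I plan to run next (the matmul test is my own addition, not part of my script above):

import tensorflow as tf

# Log which device each op is placed on; lines mentioning
# /device:GPU:0 in the console mean the op actually ran on the GPU.
tf.debugging.set_log_device_placement(True)

a = tf.random.uniform((1000, 1000))
b = tf.random.uniform((1000, 1000))
c = tf.matmul(a, b)  # should log "Executing op MatMul in device /device:GPU:0"
print(c.device)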

I ran the same deep learning script on both my MacBook Pro and my new desktop. Bearing in mind the desktop has a cuDNN-enabled GPU and the MBP doesn’t, running the same script took about the same amount of time (about a minute) on both. It’s a short script, but as far as the model (and hence epochs and batch size) goes, I have:

from keras.models import Sequential
from keras.layers import Embedding, Conv1D, MaxPooling1D, LSTM, Dense

# top_words, embedding_vecor_length and max_review_length are set earlier in the script
model = Sequential()
model.add(Embedding(top_words, embedding_vecor_length, input_length=max_review_length))
model.add(Conv1D(32, 3, padding='same', activation='relu'))
model.add(MaxPooling1D())
model.add(LSTM(100))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
model.fit(X_train, y_train, epochs=3, batch_size=64)

I’m retrieving the dataset from here:

from keras.datasets import imdb
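For completeness, the preprocessing around that import looks roughly like the following (the constant values here are placeholders for illustration, not necessarily my exact settings):

from keras.datasets import imdb
from keras.preprocessing import sequence

top_words = 5000              # assumed value: vocabulary size cap
max_review_length = 500       # assumed value: pad/truncate reviews to this length
embedding_vecor_length = 32   # assumed value: matches the variable used in the model

# Load IMDB reviews, keeping only the top_words most frequent words
(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=top_words)

# Pad/truncate all reviews to a fixed length so they can be batched
X_train = sequence.pad_sequences(X_train, maxlen=max_review_length)
X_test = sequence.pad_sequences(X_test, maxlen=max_review_length)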

Can anyone explain whether I need to configure something extra, or whether I’ve overlooked something in my setup? Perhaps someone knows of a nice deep learning dataset and script that clearly differentiates between GPU and CPU-only runs; the sketch below shows the kind of comparison I have in mind.
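Something like this minimal matmul timing is what I mean by an A/B comparison (a sketch, assuming TF 2.x eager execution; device strings may differ on other setups):

import time
import tensorflow as tf

def time_matmul(device, n=4000, repeats=10):
    """Time repeated large matrix multiplications pinned to one device."""
    with tf.device(device):
        x = tf.random.uniform((n, n))
        tf.matmul(x, x)  # warm-up run so startup cost isn't timed
        start = time.time()
        for _ in range(repeats):
            y = tf.matmul(x, x)
        _ = y.numpy()  # force async GPU work to finish before stopping the clock
        return time.time() - start

print("CPU:", time_matmul("/CPU:0"))
print("GPU:", time_matmul("/GPU:0"))  # should be dramatically faster if the GPU is used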