One GPU is utilized at 100% and the second GPU's utilization is 0%

GPUs: 2x Tesla K80
CUDA: 10.0
TensorFlow-gpu: 1.15
Keras: 2.2.5

I have started the training. The logs show that both GPU devices are prepared and allocated, but the nvidia-smi command shows the following:

One GPU is underutilized - Screenshot

Please refer to the screenshot: one GPU is at 100% utilization and the other GPU is at 0% utilization, yet the same process ID is shown as associated with both GPUs.

What is the issue here? Why is one GPU underutilized?

Have you looked into this: https://www.tensorflow.org/guide/distributed_training

tf.distribute.Strategy is a TensorFlow API to distribute training across multiple GPUs, multiple machines or TPUs.
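With two K80s in a single machine, the relevant option from that guide is MirroredStrategy. Below is a minimal sketch, assuming a tf.keras model trained with model.fit; the layer sizes and data pipeline are placeholders, not your actual model:

import tensorflow as tf

# Create a MirroredStrategy; by default it replicates the model on every
# visible GPU and splits each batch across them.
strategy = tf.distribute.MirroredStrategy()
print("Number of replicas in sync:", strategy.num_replicas_in_sync)

# Build and compile the model inside the strategy scope so its variables
# are mirrored on both GPUs.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation='relu', input_shape=(32,)),
        tf.keras.layers.Dense(10, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])

# model.fit(train_data, epochs=...)  # training call unchanged; data not shown

If the model is built outside a strategy scope (the default), TensorFlow places the work on /GPU:0 only, which would match the nvidia-smi output you describe.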

What is the output of:

import tensorflow as tf
print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))

The output of:

import tensorflow as tf
print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))

is as follows:

The output confirms that both GPUs are recognized by the framework. All CUDA-based libraries required by TensorFlow appear to be found. Now it will be up to you to configure the distribution of work across the two GPUs. This is not really a question about CUDA programming (the topic of this sub-forum), but rather about TensorFlow programming.
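As a quick sanity check (a minimal TF 1.x sketch, not from the original thread), you can enable device-placement logging and confirm whether any ops are actually being assigned to /GPU:1:

import tensorflow as tf

# Log device placement so the session output shows which GPU each op runs on.
config = tf.ConfigProto(log_device_placement=True)
with tf.Session(config=config) as sess:
    a = tf.constant([1.0, 2.0, 3.0])
    b = tf.constant([4.0, 5.0, 6.0])
    print(sess.run(a + b))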

If you cannot figure out what needs to be done from the TensorFlow documentation, you would want to ask in the TensorFlow support forum or mailing list. TensorFlow is a Google product, I think? I don't use TensorFlow and know nothing about it.