Docker cuda:10.0-cudnn7-devel-ubuntu16.04 - 3 out of 4 GPUs not utilized

Environment:
Docker image: cuda:10.0-cudnn7-devel-ubuntu16.04
Total GPUs: 4 × V100
CUDA: 10.2
TensorFlow-GPU: 1.15
Keras: 2.2.5

I have started the training. The logs show that all 4 GPU devices are prepared and allocated, but the nvidia-smi command shows the output below.

3 GPUs are not utilized at all.

Please refer to the screenshot: one GPU is at 63% utilization while the other three GPUs are at 0% utilization, and there is no process listed for any of them.
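In case the setup matters, below is a minimal sketch of the kind of multi-GPU training I am attempting (simplified and hypothetical: `build_model` and the data arrays are placeholders, and `keras.utils.multi_gpu_model` is assumed here as the mechanism for replicating the model across the GPUs; my actual script may differ):

```python
# Minimal sketch of a multi-GPU Keras 2.2.5 / TF 1.15 setup.
# build_model and the data arrays are placeholders for the real training code.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import multi_gpu_model

def build_model():
    model = Sequential()
    model.add(Dense(256, activation='relu', input_shape=(100,)))
    model.add(Dense(10, activation='softmax'))
    return model

# Replicate the model onto 4 GPUs; each batch is split across the replicas.
base_model = build_model()
parallel_model = multi_gpu_model(base_model, gpus=4)
parallel_model.compile(optimizer='adam', loss='categorical_crossentropy')

# Placeholder data, only to illustrate the call signature.
x = np.random.rand(1024, 100).astype('float32')
y = np.random.rand(1024, 10).astype('float32')
parallel_model.fit(x, y, batch_size=256, epochs=1)
```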

What is the issue here? Why are three GPUs not utilized, and why is the process list empty?