Docker image cuda:10.0-cudnn7-devel-ubuntu16.04: low training performance, GPU utilisation keeps fluctuating, nvidia-smi Processes section empty

Environment:
Docker image: cuda:10.0-cudnn7-devel-ubuntu16.04
GPUs: 4 × Tesla V100 (16.2 GB GPU memory each)
CUDA (as reported by nvidia-smi): 10.2
TensorFlow-GPU: 1.15
Keras: 2.1.3
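
For reference, a sanity check along these lines (run inside the container) shows whether this TensorFlow 1.15 build was compiled against CUDA and actually sees the four V100s; this is only a minimal sketch using the TF 1.x test API:

```python
import tensorflow as tf
from tensorflow.python.client import device_lib

print(tf.__version__)                 # expect 1.15.x
print(tf.test.is_built_with_cuda())   # True if this build was compiled against CUDA
print(tf.test.is_gpu_available())     # True if at least one GPU is usable (TF 1.x API)

# List the GPU devices TensorFlow can actually see inside the container
gpus = [d.name for d in device_lib.list_local_devices() if d.device_type == "GPU"]
print(gpus)                           # expect 4 entries for the 4 × V100 setup
```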

Current behavior

nvidia-smi first shows all GPUs at above 90% utilisation for a few seconds:

[Screenshot: nvidia-smi, all GPUs above 90% utilisation]

Then nvidia-smi shows all GPUs at 0% utilisation for a few seconds:
[Screenshot: nvidia-smi, all GPUs at 0% utilisation]

Then nvidia-smi shows random utilisation percentages across the GPUs for a few seconds:
[Screenshot: nvidia-smi, random utilisation per GPU]
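
To capture these swings as numbers rather than watching nvidia-smi by hand, the utilisation can be polled from Python roughly like this (a sketch that assumes nvidia-smi's --query-gpu interface is available inside the container):

```python
import subprocess
import time

# Poll per-GPU utilisation once per second and print it with a timestamp,
# so the >90% / 0% / random swings become visible as a time series.
QUERY = ["nvidia-smi",
         "--query-gpu=index,utilization.gpu,memory.used",
         "--format=csv,noheader,nounits"]

for _ in range(60):                       # sample for about one minute
    out = subprocess.check_output(QUERY).decode().strip()
    stamp = time.strftime("%H:%M:%S")
    for line in out.splitlines():
        idx, util, mem = [field.strip() for field in line.split(",")]
        print(f"{stamp}  GPU{idx}  util={util}%  mem={mem} MiB")
    time.sleep(1)
```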

Training performance is low and takes noticeably longer in the Docker container described above. The same code works fine with 2 × Tesla K80 GPUs and CUDA 10.0 on a dedicated server, as shown below.

[Screenshot: nvidia-smi on the dedicated K80 server]

But in the Docker container, why does the GPU utilisation keep changing so erratically, and why is the Processes section empty? Why does nvidia-smi report CUDA Version 10.2 inside the cuda:10.0-cudnn7-devel-ubuntu16.04 image (see the first three nvidia-smi screenshots)? I have not installed the CUDA 10.2 toolkit or cuDNN libraries in the image. How can I solve this issue?
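
As a side note, the CUDA toolkit actually installed inside the container can be checked independently of the version nvidia-smi prints in its header, for example like this (a sketch; it assumes the /usr/local/cuda/version.txt file and the nvcc binary that the -devel base images normally ship with):

```python
from pathlib import Path
import subprocess

# Toolkit version installed in the image (expected to say 10.0 for this base image)
print(Path("/usr/local/cuda/version.txt").read_text().strip())

# nvcc also reports the installed toolkit version, independently of the driver
print(subprocess.check_output(["nvcc", "--version"]).decode())
```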

The ps aux command shows the training process IDs, but nvidia-smi does not list them.
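
To cross-check the empty Processes table against what ps reports, the two views can be compared roughly like this (a sketch that assumes nvidia-smi's --query-compute-apps query):

```python
import subprocess

# PIDs that nvidia-smi attributes to the GPUs (empty string if it sees none)
gpu_procs = subprocess.check_output(
    ["nvidia-smi", "--query-compute-apps=pid,used_memory", "--format=csv,noheader"]
).decode().strip()
print("nvidia-smi compute apps:", gpu_procs or "<none>")

# Python processes that ps can see inside the container
ps_out = subprocess.check_output(["ps", "aux"]).decode()
python_lines = [line for line in ps_out.splitlines() if "python" in line]
print("ps aux python processes:", len(python_lines))
```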