When we run the same deep learning service via nvidia-docker on servers in different environments, each environment consumes a different amount of GPU VRAM.
We ran it in three environments in total; the details of each are as follows.
Environment 1
GPU: RTX 2060
NVIDIA driver: 460.80
Host CUDA: 11.0
Docker (container) CUDA: 11.2
GPU memory consumed: 1.3 GB
Environment 2
GPU: RTX 2070, RTX 2080 Ti
NVIDIA driver: 450.102.04
Host CUDA: 10.1
Docker (container) CUDA: 11.2
GPU memory consumed: 1.1 GB
Environment 3
GPU: RTX 3090
NVIDIA driver: 465.27
Host CUDA: 11.2
Docker (container) CUDA: 11.2
GPU memory consumed: 1.7 GB
I suspect the cause is the NVIDIA driver version or the host CUDA version, but I would like to verify the actual cause accurately.
Does anyone know what explains this difference?
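For reference, the consumption figures above come from the per-process memory column of `nvidia-smi`. A minimal sketch of how I collect them for comparison across environments (the sample output string below is illustrative, not real measurements from my machines):

```python
import csv
import io

def parse_compute_apps(text):
    """Parse the CSV output of:
        nvidia-smi --query-compute-apps=pid,used_memory --format=csv,noheader,nounits
    Returns a dict mapping PID -> used memory in MiB."""
    usage = {}
    for row in csv.reader(io.StringIO(text)):
        if len(row) == 2:
            pid, mib = (field.strip() for field in row)
            usage[int(pid)] = int(mib)
    return usage

# In practice the text would come from:
#   subprocess.check_output(["nvidia-smi", "--query-compute-apps=pid,used_memory",
#                            "--format=csv,noheader,nounits"], text=True)
# Illustrative sample output (hypothetical PIDs and values):
sample = "1234, 1331\n5678, 120\n"
print(parse_compute_apps(sample))  # {1234: 1331, 5678: 120}
```

Comparing these numbers for the same process across the three environments is how I arrived at the 1.1-1.7 GB figures.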