VRAM consumption varies by machine environment

When I run the same deep learning service through nvidia-docker on servers in different environments, each one consumes a different amount of GPU VRAM.

I ran it in three environments in total; each experimental environment is as follows.

Graphics card: RTX 2060
NVIDIA driver: 460.80
Host CUDA: 11.0
Docker CUDA: 11.2
Consumed resources: 1.3 GB

Graphics card: RTX 2070, RTX 2080 Ti
NVIDIA driver: 450.102.04
Host CUDA: 10.1
Docker CUDA: 11.2
Consumed resources: 1.1 GB

Graphics card: RTX 3090
NVIDIA driver: 465.27
Host CUDA: 11.2
Docker CUDA: 11.2
Consumed resources: 1.7 GB
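To compare the numbers above consistently across hosts, the memory usage can be queried the same way on every machine. A minimal sketch, assuming the service container is already running with GPU access (the image name `my-dl-service` is hypothetical):

```shell
# Query per-GPU memory usage on the host in a machine-readable form;
# nvidia-smi's --query-gpu/--format options are part of its standard CLI.
nvidia-smi --query-gpu=name,driver_version,memory.used --format=csv

# The same query can be run inside the container to confirm both views agree
# (assumption: the container was started with GPU access, e.g. --gpus all).
docker exec my-dl-service nvidia-smi --query-gpu=memory.used --format=csv
```

Recording the output of the first command before and after starting the service gives the per-environment consumption figure directly, rather than reading it off interactively.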

I suspect the cause is the NVIDIA driver version or the host CUDA version.

But I would like to pin down the cause precisely.

Does anyone know what is happening here?

Hi @whoo91121 ,
Apologies for the delayed response.
When you say "Consumption resources", do you mean the workspace? If so, it is not surprising: different kernels may be picked on different platforms, and therefore the workspace sizes consumed can also differ.
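If the workspace is indeed what differs, one way to make the runs more comparable is to cap the scratch space a framework lets cuDNN use, so kernel selection cannot balloon VRAM on one platform. A hedged config sketch, assuming the service uses TensorFlow (for other frameworks the knob is different, e.g. PyTorch's cuDNN settings):

```shell
# TF_CUDNN_WORKSPACE_LIMIT_IN_MB caps the cuDNN convolution workspace
# TensorFlow will request; 512 MB here is an illustrative value, not a
# recommendation. With the same cap on every host, the algorithms cuDNN
# can pick are constrained to the same memory budget.
export TF_CUDNN_WORKSPACE_LIMIT_IN_MB=512
```

If memory usage converges across the three environments under the same cap, that supports the workspace explanation rather than a driver or host-CUDA issue.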