Same program, different memory usage on different GPUs

  1. I use two types of GPU for prediction, an RTX 2060 and a GTX 1080 Ti, both with TensorFlow 1.15.0. The program is the same, but the GPU memory usage differs (see the images): the 2060 uses much more memory. The machines have the same CUDA version, cuDNN version, and driver version, but the 2060 runs nvidia-tensorflow while the 1080 Ti runs standard TensorFlow. I want to know why this happens.
  2. I have also used another tool, gpustat, to monitor the GPUs, and I cloned YOLOv3 from GitHub (GitHub - qqwweee/keras-yolo3: A Keras implementation of YOLOv3 (Tensorflow backend)) and tested it on both GPUs. The results are similar: the 2060 uses more memory. By the way, I set TensorFlow to allocate only the memory it needs instead of grabbing all of it, with this code:
    # Allocate GPU memory on demand rather than reserving it all up front
    gpu_options = tf.GPUOptions(allow_growth=True)
    sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
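
    TF1 also supports an explicit per-process memory cap; a minimal sketch combining it with allow_growth (the 0.5 fraction is only an illustrative value, not from my actual setup):

    import tensorflow as tf

    # Grow allocations on demand, and never claim more than ~50% of the GPU.
    gpu_options = tf.GPUOptions(allow_growth=True,
                                per_process_gpu_memory_fraction=0.5)
    sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))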

You might want to check with the authors of the applications / middleware you are using.

There is nothing that prevents any given program from using different amounts of GPU memory on different hardware platforms. It might be running entirely different code paths on different architectures, and/or make different trade-offs between memory usage and performance depending on GPU properties.

FWIW, I personally do not quite trust various tools that claim to show per-app use of GPU memory.
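
One sanity check is to compare what those tools report against TensorFlow's own allocator statistics. A minimal TF 1.x sketch (it assumes tf.contrib.memory_stats is available in your build):

    import tensorflow as tf

    # Build a trivial GPU graph plus an op that reports the peak number of
    # bytes the TF allocator has handed out on /gpu:0.
    with tf.device('/gpu:0'):
        x = tf.random.normal([1024, 1024])
        y = tf.matmul(x, x)
        peak_bytes = tf.contrib.memory_stats.MaxBytesInUse()

    gpu_options = tf.GPUOptions(allow_growth=True)
    with tf.Session(config=tf.ConfigProto(gpu_options=gpu_options)) as sess:
        sess.run(y)
        print('peak allocator usage: %.1f MiB' % (sess.run(peak_bytes) / 2**20))

If that number is far below what nvidia-smi / gpustat report for the process, the gap is mostly CUDA context and library overhead rather than your model's tensors, and that overhead can easily differ between GPU architectures.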

I have tested TensorFlow and PaddlePaddle; maybe I should test PyTorch too. Both nvidia-smi and gpustat show similar memory usage, so maybe neither is reliable. But when I run the program on the 2060 it warns that it cannot allocate enough memory, while running four processes of the same program on the 1080 Ti produces no allocation warnings at all. So I am confused.
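
For a cross-check outside TensorFlow, maybe I can ask the driver directly for per-process usage through NVML; a small sketch (this assumes the pynvml package is installed):

    import pynvml

    # Ask the driver which compute processes are on GPU 0 and how much
    # memory each one currently holds.
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)
    for proc in pynvml.nvmlDeviceGetComputeRunningProcesses(handle):
        used_mib = proc.usedGpuMemory / 2**20 if proc.usedGpuMemory else 0
        print('pid %d uses %.0f MiB' % (proc.pid, used_mib))
    pynvml.nvmlShutdown()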