Why does an RTX card need more GPU memory than a GTX card?

I would like to understand the reasons.

[environment]

pc_1 : gtx 1080ti(11G), cuda-10, tensorflow-gpu==1.13
pc_2 : rtx 2080ti(11G), cuda-10, tensorflow-gpu==1.15
pc_3 : rtx 2080ti(11G), cuda-11.0, tensorflow-gpu==1.15
pc_4 : rtx 3080(10G), cuda-11.1, nvidia-tensorflow==r1.15.4-20.11

On pc_1–pc_3 I can load a weight file with the GPU memory fraction limited to 1.5 GB. Yesterday I tried to load the same weight file with a 1.5 GB memory fraction on pc_4, but it failed to load. It only loads on pc_4 when I allow a memory fraction of about 5.7 GB.
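For reference, a minimal sketch of how such a per-process memory cap is typically set in TF 1.x; the exact session setup used on these machines is an assumption, and the fraction simply expresses 1.5 GB of the 11 GB card:

```python
import tensorflow as tf

# Cap TensorFlow's GPU allocation at roughly 1.5 GB on an 11 GB card.
# per_process_gpu_memory_fraction is a fraction of total device memory.
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=1.5 / 11.0)
config = tf.ConfigProto(gpu_options=gpu_options)
sess = tf.Session(config=config)
```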

Why does the same weight file require more GPU memory?
I couldn’t find a solution anywhere. I suspect the RTX 30xx series or nvidia-tensorflow is the reason.

Hey @god.donghwan,

I am having a similar GPU memory issue on my RTX 3080. I also tried the NGC TensorFlow containers, but I am still facing the memory issue: models occupy more memory on the 3080 than on the 2080 Ti.

Did you find any reasons behind this?

Hello @Meet.
Sorry for the late reply.
I found the reason: the graphics card architecture is different.
The RTX 30xx series uses the Ampere architecture (compute capability 8.6). As far as I can tell, TensorFlow builds that lack native Ampere kernels must JIT-compile PTX at startup, and the resulting CUDA context and compiled kernels take up extra device memory.

It is recommended to use a version of TensorFlow that supports the 3080 (TF 2.4 or later, built against CUDA 11).

If you want to use a TF 1.x model, I recommend loading it into TF 2.x.
You can load the model using the tensorflow.compat.v1 API.
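A minimal sketch of that approach under TF 2.x, assuming the weights are a standard TF 1.x checkpoint; the "model.ckpt" paths are placeholders:

```python
import tensorflow as tf

# Run the TF 1.x graph under TF 2.x via the compat.v1 API.
tf.compat.v1.disable_eager_execution()

graph = tf.Graph()
with graph.as_default():
    # Rebuild the graph from the checkpoint's MetaGraph, then restore weights.
    saver = tf.compat.v1.train.import_meta_graph("model.ckpt.meta")
    sess = tf.compat.v1.Session(graph=graph)
    saver.restore(sess, "model.ckpt")
```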

Good luck!