I’ve loaded a weight file with a GPU memory fraction of 1.5 GB on pc_1 through pc_3. Yesterday I tested loading the same weight file with a 1.5 GB memory fraction on pc_4, but it failed to load; it only loaded when I raised the memory fraction to about 5.7 GB on pc_4.
It’s the same weight file, so why does it require more GPU memory?
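For context, here is a minimal sketch of how a memory cap like this is typically set with the TF 1.x-style API (the 11 GB total is an assumed card size for illustration; `per_process_gpu_memory_fraction` takes a fraction of total memory, not bytes, so a 1.5 GB cap on an 11 GB card is roughly 0.136):

```python
def memory_fraction(cap_gb, total_gb):
    # per_process_gpu_memory_fraction is a fraction of total GPU memory,
    # not an absolute byte count, so convert the cap first.
    return cap_gb / total_gb

def make_capped_session(cap_gb, total_gb):
    # TensorFlow is imported inside the function so this helper stays
    # importable on machines without TF installed.
    import tensorflow as tf
    config = tf.compat.v1.ConfigProto()
    config.gpu_options.per_process_gpu_memory_fraction = memory_fraction(cap_gb, total_gb)
    return tf.compat.v1.Session(config=config)
```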
I couldn’t find a solution anywhere. My guess is that the RTX 30xx series or nvidia-tensorflow is the cause.
I am having a similar GPU memory issue on my RTX 3080. I also tried the NGC TensorFlow containers, but I am still facing the problem: models occupy more memory on the 3080 than on the 2080 Ti.
Hello @Meet,
I’m sorry for the late reply.
I found the reason: the GPU architectures are different.
It is recommended to use a TensorFlow build that supports the RTX 3080 (Ampere cards require CUDA 11, which TensorFlow supports from version 2.4 onward).
If you want to use a TF 1.x model, I recommend loading it into TF 2.x.
You can load the model using the tensorflow.compat.v1 API.
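A minimal sketch of that approach, assuming the weights are a TF 1.x frozen GraphDef (the `.pb` path is a placeholder; TensorFlow is imported inside the function so the snippet stays importable without it):

```python
def load_frozen_graph(pb_path):
    """Load a TF 1.x frozen GraphDef under TF 2.x via the compat.v1 API."""
    import tensorflow as tf
    # Run the graph TF 1.x-style (session-based) instead of eagerly.
    tf.compat.v1.disable_eager_execution()
    graph_def = tf.compat.v1.GraphDef()
    with tf.io.gfile.GFile(pb_path, "rb") as f:
        graph_def.ParseFromString(f.read())
    graph = tf.Graph()
    with graph.as_default():
        # Import the serialized nodes into a fresh graph.
        tf.compat.v1.import_graph_def(graph_def, name="")
    return graph
```

You would then create a `tf.compat.v1.Session(graph=...)` on the returned graph and run it as in TF 1.x.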