System: Dell XPS 17 laptop with an NVIDIA GeForce RTX 3060 Max-Q (6 GB GDDR6 VRAM)
I'm using the RTX 3060 to accelerate the neural network for a DQN agent, and I'm running into major obstacles with GPU memory limits. Based on the outputs below, TensorFlow only sees 3475 MB of the card's 6 GB, so roughly 2.6 GB of VRAM seems to be missing for some reason. Any help would be greatly appreciated!
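For reference, here is the arithmetic behind the 2.6 GB figure, comparing the card's nominal 6 GB spec with the amount TensorFlow reports in the device-creation log below:

```python
# Rough arithmetic behind the "missing 2.6 GB": the RTX 3060 Laptop GPU has
# 6 GB of VRAM, but TensorFlow's device-creation log reports only 3475 MB.
total_vram_mb = 6 * 1024   # nominal VRAM from the card's spec sheet
tf_visible_mb = 3475       # from the gpu_device.cc log line below
missing_mb = total_vram_mb - tf_visible_mb
print(f"Unaccounted for: {missing_mb} MB")  # 2669 MB, i.e. ~2.6 GB
```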
Running the following to initialize a TensorFlow/Keras model:
from keras.models import Sequential
model = Sequential()
This prints the following to the console:
I tensorflow/core/common_runtime/gpu/gpu_device.cc:1616] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 3475 MB memory: → device: 0, name: NVIDIA GeForce RTX 3060 Laptop GPU, pci bus id: 0000:01:00.0, compute capability: 8.6
When the model becomes too large and the GPU runs out of memory, the allocator prints these statistics to the console:
Limit: 3643801600
InUse: 2763396352
MaxInUse: 3462511104
NumAllocs: 185
MaxAllocSize: 1610612736
Reserved: 0
PeakReserved: 0
LargestFreeBlock: 0
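For readability, the allocator's raw byte counts can be converted to MiB (values copied verbatim from the log above):

```python
# Convert the allocator's raw byte counts (copied from the log above) to MiB.
stats = {
    "Limit": 3_643_801_600,
    "InUse": 2_763_396_352,
    "MaxInUse": 3_462_511_104,
    "MaxAllocSize": 1_610_612_736,
}
for name, nbytes in stats.items():
    print(f"{name}: {nbytes / 2**20:.0f} MiB")
```

Note that Limit (3475 MiB) matches the 3475 MB TensorFlow created the device with, and MaxAllocSize is exactly 1.5 GiB.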