CUDA Out of Memory on RTX 3060 with TF/PyTorch

Dear NVIDIA developer team,

This week I upgraded my graphics card from an RTX 2060 to an RTX 3060 because it has more VRAM, so that I could run deep learning experiments faster.

The problem is that now I cannot even train with the new GPU due to a constant OOM issue. I have tested both PyTorch (1.7.1+cu110, 1.8.0+cu111) and tensorflow-gpu (2.4.3, CUDA 11.1), and both give the same OOM error.

From my observation, GPU memory usage with tensorflow-gpu rises to about 9.xx GB of the available 12 GB of VRAM (although it eventually fails with OOM). With PyTorch, however, I didn't observe any spike in GPU memory usage at all.
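As a side note on the TensorFlow behaviour: by default TF 2.x reserves nearly all free VRAM at startup, which would explain the jump to ~9 GB regardless of model size. A minimal sketch (assuming TensorFlow 2.x) that switches to on-demand allocation instead, which must run before any GPU op:

```python
# Sketch, assuming TensorFlow 2.x: make GPU memory grow on demand
# instead of TF pre-allocating almost all VRAM up front.
import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
for gpu in gpus:
    # Must be called before any op touches the GPU
    tf.config.experimental.set_memory_growth(gpu, True)
print(gpus)
```

This does not fix a genuine OOM, but it makes the reported memory numbers reflect what the model actually uses.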

Hence, I am wondering: might this be an issue in the CUDA driver itself, which perhaps doesn't support the RTX 3060 yet (since the card is less than a month old)?
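One quick way to narrow this down (a sketch, assuming a CUDA-enabled PyTorch install): the RTX 3060 is compute capability 8.6 (sm_86), and a wheel built only for older architectures will fail on it even though the driver is fine. You can check which architectures your binary ships kernels for:

```python
# Sketch, assuming PyTorch with CUDA support is installed:
# check whether this wheel was built for the RTX 3060 (sm_86).
import torch

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)  # (8, 6) on an RTX 3060
    arch = f"sm_{major}{minor}"
    # get_arch_list() reports the architectures this binary was compiled for;
    # CUDA 11.0 wheels typically lack sm_86, while CUDA 11.1 wheels include it.
    print(arch, arch in torch.cuda.get_arch_list())
```

If `sm_86` is missing from the list, the problem is the binary build rather than the driver.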

Reproducing the issue

I have tried this and this, but neither helped much.

To test PyTorch, see here.

To test TensorFlow, see the attached script (2.5 KB).
Error snapshot:


Hi @briliantnugraha,

thanks for raising this issue.
If I understand the use case correctly, you are seeing an OOM error on your 3060 with the PyTorch 1.8.0+CUDA11.1 binaries (pip wheels or conda binaries) when running the CIFAR10 script?

If so, could you run a quick test and try to allocate a single tensor on this device via:

import torch

x = torch.randn(1024**3, device='cuda')

and check whether this also runs OOM.
This would allocate 4 GB on your device and should work fine.
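For reference, the 4 GB figure follows from `torch.randn` defaulting to float32 (4 bytes per element):

```python
# Why the test tensor is 4 GiB: 1024**3 float32 elements at 4 bytes each.
num_elements = 1024 ** 3
bytes_per_float32 = 4
total_gib = num_elements * bytes_per_float32 / 1024 ** 3
print(total_gib)  # → 4.0
```

So the allocation uses only a third of the 12 GB on an RTX 3060 and should not OOM on an otherwise idle card.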

Since you are seeing an OOM with the CIFAR10 example, I suspect the OOM error might be a red herring, as this example should not use the full device memory.