CUDA running out of memory when training a classifier

Hi.

I have a Dell XPS 15 7590 with an NVIDIA GeForce GTX 1650 (4 GB), 64 GB of RAM, and 2 TB of storage. I bought this laptop partly so I could do deep learning on it. But whenever I try to run a notebook to train a classifier, I get a CUDA out-of-memory error because the data being pushed to the GPU doesn't fit; as the traceback shows, most of the 4 GB is already allocated or reserved by PyTorch:

```
RuntimeError: CUDA out of memory. Tried to allocate 196.00 MiB (GPU 0; 3.82 GiB total capacity; 2.19 GiB already allocated; 129.81 MiB free; 2.30 GiB reserved in total by PyTorch)
```
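
For context, the notebook is essentially a standard PyTorch training loop. Here is a stripped-down sketch of it; the model, dataset, and batch size below are placeholders, not my actual code:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Random tensors standing in for the real dataset
inputs = torch.randn(5000, 784)
labels = torch.randint(0, 10, (5000,))
loader = DataLoader(TensorDataset(inputs, labels), batch_size=64, shuffle=True)

# Placeholder classifier; the real model is a much bigger network
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for x, y in loader:
    x, y = x.to(device), y.to(device)  # the OOM is raised around here
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
```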

I can run the notebook on the CPU, but it is painfully slow and heats up the laptop fast.

I’ve been using Colab the whole time, but lately it has started crashing in the middle of training, which just prolongs the process. I couldn’t validate anything today because of the crashes.

What can I do? I have a kickass laptop that I can’t even use properly. Is there a way to divide the workload between the GPU and my RAM?
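
To make that last question concrete: is something like gradient accumulation what I should be looking at? A rough sketch of what I have in mind, reusing the names from the snippet above (`accum_steps` is a guess):

```python
# Split each effective batch into smaller micro-batches so only a slice
# of the data and activations sits in the 4 GB of VRAM at a time,
# accumulating gradients across steps. Reuses model/loader/criterion/
# optimizer/device from the snippet above; accum_steps is a guess.
accum_steps = 4
optimizer.zero_grad()
for i, (x, y) in enumerate(loader):
    x, y = x.to(device), y.to(device)
    loss = criterion(model(x), y) / accum_steps  # scale so gradients average out
    loss.backward()                              # gradients add up across micro-batches
    if (i + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```

Or is there a better way to keep the full dataset in my 64 GB of RAM and only stream what the GPU needs?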
