CUDA out of memory

When I load my model, which is 390+ MB, onto my RTX 3060 GPU using the following code

model = model.to(device)

I get the following error message:

RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 5.81 GiB total capacity; 393.40 MiB already allocated; 3.81 MiB free; 428.00 MiB reserved in total by PyTorch)

According to the message, the GPU has almost 6 GiB of memory and PyTorch has allocated less than 1 GiB. How can I be getting an out-of-memory error?

Thank you.

Something else is using memory on that GPU. The message says only 3.81 MiB are free, so most of the ~6 GiB is held by other processes (the desktop environment, other applications, or another CUDA program). Run `nvidia-smi` to see which processes are occupying the GPU.
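From within Python you can also compare what the CUDA driver reports (all processes) with what PyTorch itself has allocated (this process only). A minimal sketch, assuming a reasonably recent PyTorch that provides `torch.cuda.mem_get_info`:

```python
import torch

if torch.cuda.is_available():
    # Free vs. total memory as seen by the driver; this counts
    # allocations from EVERY process on the GPU, not just this one.
    free, total = torch.cuda.mem_get_info(0)
    print(f"driver:    {free / 2**20:.0f} MiB free of {total / 2**20:.0f} MiB")

    # Memory this PyTorch process has allocated and reserved.
    print(f"allocated: {torch.cuda.memory_allocated(0) / 2**20:.0f} MiB")
    print(f"reserved:  {torch.cuda.memory_reserved(0) / 2**20:.0f} MiB")
else:
    print("CUDA is not available on this machine")
```

If the driver's free figure is far below `total - reserved`, the gap is memory held outside this PyTorch process, which matches the error you are seeing.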