Hi team, I have two questions.

First, while training a model I am getting this error:
RuntimeError: CUDA out of memory. Tried to allocate 24.00 MiB (GPU 0; 3.95 GiB total capacity; 3.25 GiB already allocated; 22.06 MiB free; 37.90 MiB cached)
How can I check where my 3.25 GiB is allocated, and how can I free it so that it becomes available to my CUDA program dynamically?
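For context, here is a minimal sketch of what I have tried so far to inspect the allocation (assuming PyTorch; the `try`/`except` guard is only so the snippet runs even where PyTorch is missing, and the printed numbers would of course differ per machine):

```python
# Sketch: inspect what PyTorch's caching allocator holds on GPU 0
# and release unused cached blocks back to the driver.

def mib(n_bytes):
    """Bytes -> MiB, the unit used in the CUDA out-of-memory message."""
    return n_bytes / 2**20

try:
    import torch
    if torch.cuda.is_available():
        # Memory currently occupied by live tensors on GPU 0.
        print(f"allocated: {mib(torch.cuda.memory_allocated(0)):.2f} MiB")
        # Memory reserved by the caching allocator (allocated + cached).
        print(f"reserved:  {mib(torch.cuda.memory_reserved(0)):.2f} MiB")
        # Detailed per-pool breakdown of where the 3.25 GiB lives.
        print(torch.cuda.memory_summary(0))
        # Releases cached-but-unallocated blocks; tensors still
        # referenced from Python remain allocated.
        torch.cuda.empty_cache()
except ImportError:
    pass  # PyTorch not installed in this environment

# Sanity check on the unit conversion used above.
assert mib(24 * 2**20) == 24.0
```

Is this the right way to account for the 3.25 GiB, or is there allocator state that these counters do not cover?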
Second, I came across a forum post while reading about GPU memory management and executed the command below:
grep -i --color memory /var/log/Xorg.0.log
[ 44179.862] (--) NVIDIA(0): Memory: 4194304 kBytes
[ 44180.580] (II) NVIDIA: Using 24576.00 MB of virtual memory for indirect memory
[ 44180.625] (==) NVIDIA(0): Disabling shared memory pixmaps
How is the virtual memory for indirect memory created? The log reports 24576 MB, i.e. 24.576 GB, but my 1050 Ti has only 4 GB of video RAM. How is this virtual memory allocated?