I found that my GPU BAR1 memory usage could not be released after I ran my PyTorch code in VS Code with an Anaconda environment (PyTorch 2.0.1; py3.10_cuda11.8_cudnn8_0 pytorch). GPU driver version: 545.84; operating system: Windows 11.
I found that other users have had the same problem as me, but they could clear the BAR1 memory through some commands on Linux, and I wonder if there is any solution for Windows.
Problem Link : GPU BAR1 memory not released when main process gets killed · Issue #9894 · Lightning-AI/lightning · GitHub
The methodology is the same as reported elsewhere: you have to kill any process that is using the GPU. You might be able to do this in Task Manager on Windows. nvidia-smi may give some clues about which processes are currently using the GPU, but I’m not suggesting it is guaranteed in all cases to give you a list of the processes that need to be cancelled/killed.
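The kill-everything-using-the-GPU step can be scripted on Windows. This is a minimal sketch, assuming nvidia-smi is on the PATH; it uses the standard --query-compute-apps and --format=csv options, then terminates each listed PID with taskkill. The helper function that parses the CSV output is separated out so it can be checked without a GPU.

```python
# Sketch: list GPU compute processes via nvidia-smi, then kill them on
# Windows with taskkill. Assumes nvidia-smi is on PATH; output columns may
# vary slightly between driver versions.
import subprocess


def parse_compute_apps(csv_text: str) -> list[tuple[int, str]]:
    """Parse 'pid, process_name' rows (csv,noheader format) into (pid, name) pairs."""
    procs = []
    for line in csv_text.strip().splitlines():
        if not line.strip():
            continue
        pid_str, name = (field.strip() for field in line.split(",", 1))
        procs.append((int(pid_str), name))
    return procs


def kill_gpu_processes() -> None:
    """Query nvidia-smi for compute processes and force-kill each one."""
    out = subprocess.run(
        ["nvidia-smi", "--query-compute-apps=pid,process_name",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    for pid, name in parse_compute_apps(out):
        print(f"killing {name} (PID {pid})")
        # /F forces termination; run from an elevated prompt if access is denied.
        subprocess.run(["taskkill", "/PID", str(pid), "/F"], check=False)
```

Note that, as mentioned above, nvidia-smi is not guaranteed to list every process holding GPU resources, so this may still leave BAR1 memory pinned.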
I’ve already tried nvidia-smi, and it seems my Python processes have already been killed. BAR1 memory is still occupied even though I killed all processes shown by nvidia-smi. I can only see this memory usage via nvidia-smi -q -d Memory. I wonder if there is any command that can reset my GPU and release the memory.
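For checking whether the memory actually gets freed, it helps to pull just the BAR1 block out of the verbose nvidia-smi -q -d MEMORY report. A small sketch, assuming the "BAR1 Memory Usage" section header and 8-space detail indentation used by recent drivers (the exact layout may vary by driver version); the sample text in the test below is illustrative, not real output:

```python
# Sketch: extract the "BAR1 Memory Usage" section from the text report
# produced by `nvidia-smi -q -d MEMORY`. Section headers are assumed to be
# less indented than their detail lines (8 spaces), per recent drivers.
import subprocess  # only needed for the live call shown at the bottom


def extract_bar1_section(report: str) -> str:
    """Return the BAR1 Memory Usage header plus its indented detail lines."""
    out = []
    capturing = False
    for line in report.splitlines():
        if "BAR1 Memory Usage" in line:
            capturing = True
            out.append(line.strip())
            continue
        if capturing:
            if line.startswith(" " * 8):  # detail lines belong to this block
                out.append(line.strip())
            else:
                break  # next section header reached
    return "\n".join(out)


# Live usage (requires an NVIDIA driver with nvidia-smi on PATH):
# report = subprocess.run(["nvidia-smi", "-q", "-d", "MEMORY"],
#                         capture_output=True, text=True, check=True).stdout
# print(extract_bar1_section(report))
```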
According to this, nvidia-smi - -gpu-reset works. Otherwise, as per the referenced thread, it looks like it could be an issue specific to a combination of factors.
On Windows, this command is not available.
I just realised there’s a typo in the command I copied - it should be:
nvidia-smi --gpu-reset (no space between the two dashes).
Does that work?
I appreciate your guidance, but it appears that the command is still not compatible with the GPU of the Windows laptop.
How about:
nvidia-smi --gpu-reset -i 0
where “0” is the ID of the GPU, which I’m guessing is 0 on a laptop.
It returns
“ERROR: Option --gpu-reset is not recognized. Please run ‘nvidia-smi -h’.”
and I have not found any command for resetting the GPU in the help output.
For my laptop, I also tried the shortcut Win + Ctrl + Shift + B to restart the graphics driver, and it still does not work.