PyTorch in CUDA mode uses a huge amount of memory

I have run a small model with PyTorch CUDA whose parameter count is only 0.5 million.
It allocates 2 GB+ of memory, which is too much to leave room for other applications.
Is there a way to limit PyTorch's maximum CUDA memory usage?
Thanks!

Hi,

Please check whether the following API meets your requirement:

https://pytorch.org/docs/1.9.0/generated/torch.cuda.max_memory_allocated.html
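A minimal sketch of how these might be used together: `torch.cuda.max_memory_allocated` only *reports* the peak tensor memory, while `torch.cuda.set_per_process_memory_fraction` (available since PyTorch 1.8) can *cap* how much the caching allocator may grab. The 0.5 fraction and the matrix sizes below are arbitrary values for illustration, not recommendations:

```python
import torch

def to_mib(n_bytes):
    """Convert a byte count to MiB for readable reporting."""
    return n_bytes / (1024 ** 2)

if torch.cuda.is_available():
    # Cap this process at ~50% of the GPU's total memory (hypothetical
    # fraction; raise it if your workload needs more).
    torch.cuda.set_per_process_memory_fraction(0.5, device=0)

    x = torch.randn(1024, 1024, device="cuda")
    y = x @ x
    torch.cuda.synchronize()

    # Peak memory allocated to tensors by PyTorch so far in this process.
    print(f"peak allocated: {to_mib(torch.cuda.max_memory_allocated()):.1f} MiB")
    # Memory reserved by the caching allocator (usually larger than allocated).
    print(f"reserved:       {to_mib(torch.cuda.memory_reserved()):.1f} MiB")
else:
    print("CUDA not available; memory stats require a GPU")
```

Note that this cap applies only to PyTorch's caching allocator; the CUDA context itself (kernel binaries, cuDNN, etc.) is outside its control, so total process memory will still exceed the fraction you set.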

Thanks.

Also, please refer to this post: https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-9-0-now-available/72048/843

This doesn't seem specific to Jetson, as using CUDA in PyTorch also consumes extra memory on PC/x86. I believe it is loading compiled CUDA kernel binaries and libraries like cuDNN. If you have swap mounted, much of it can be swapped out, in my experience.
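Since much of that footprint belongs to the caching allocator rather than live tensors, one thing worth trying is returning cached blocks to the driver after freeing tensors. A sketch, assuming a CUDA-capable device (the tensor size here is arbitrary):

```python
import torch

if torch.cuda.is_available():
    # Allocate and then free a large tensor; the memory stays cached.
    x = torch.randn(4096, 4096, device="cuda")
    del x

    before = torch.cuda.memory_reserved()
    # Return cached, unused blocks to the driver so other applications
    # can use them. The CUDA context and loaded kernel binaries stay
    # resident, so this will not reclaim the full 2 GB+ overhead.
    torch.cuda.empty_cache()
    after = torch.cuda.memory_reserved()

    print(f"reserved before empty_cache: {before} bytes, after: {after} bytes")
else:
    print("CUDA not available; nothing to release")
```

This helps coexist with other GPU applications, but the baseline context overhead the reply above describes remains.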

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.