Calling .cuda() consumes all the RAM

Hi guys,

I observed a very strange phenomenon on my Jetson Nano board.
Sending data to the GPU by calling .cuda() consumes all of the memory, no matter how small the amount of data being sent is.

JetPack is 4.5.1, PyTorch is 1.8/1.7. I have attached an example video.
I am new to Jetson; how can I solve this issue?
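
For reference, this is roughly what I am running (a minimal sketch of the reproduction; the full script is only in the attached video, and the tensor size does not seem to matter):

```python
import torch

# A tiny tensor -- only a handful of floats
x = torch.zeros(4)

# Moving it to the GPU is the step that uses up almost all of the board's RAM
x = x.cuda()
print(x)
```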

Thanks!

Hi,

Could you try setting the maximum per-process memory fraction to see if it helps?

https://pytorch.org/docs/stable/generated/torch.cuda.set_per_process_memory_fraction.html#torch.cuda.set_per_process_memory_fraction
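
For example, something like the below (a rough sketch; the 0.5 fraction and device index 0 are just placeholders for your setup):

```python
import torch

# Allow this process to use at most 50% of the GPU memory on device 0
# (this limits PyTorch's caching allocator, not other CUDA usage)
torch.cuda.set_per_process_memory_fraction(0.5, device=0)

# Then run the usual transfer and watch the memory usage with tegrastats
x = torch.zeros(4).cuda()
```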

Thanks.

Hi,

Thanks for your reply!
On my device, calling torch.cuda.set_per_process_memory_fraction(fraction, device=None) also causes the same memory overrun.

Hi,

We tested this on our Nano with JetPack 4.5.1 and PyTorch v1.8.0 installed from this topic.

RAM 1052/3964MB (lfb 91x4MB) ...
RAM 1052/3964MB (lfb 91x4MB) ...
RAM 1052/3964MB (lfb 91x4MB) ...
RAM 1130/3964MB (lfb 91x4MB) ...
RAM 1240/3964MB (lfb 91x4MB) ...
RAM 1376/3964MB (lfb 91x4MB) ...
RAM 1513/3964MB (lfb 91x4MB) ...
RAM 1665/3964MB (lfb 88x4MB) ...
RAM 1821/3964MB (lfb 80x4MB) ...
RAM 1999/3964MB (lfb 74x4MB) ...
RAM 2146/3964MB (lfb 36x4MB) ...
RAM 2315/3964MB (lfb 55x1MB) ...
RAM 2386/3964MB (lfb 15x1MB) ...
RAM 2386/3964MB (lfb 15x1MB) ...
RAM 2386/3964MB (lfb 15x1MB) ...

The memory increases by around 1 GiB in our experiment.
Is this similar to your observation?

Usually, the underlying libraries are loaded only when they are first used.
Creating a GPU buffer can trigger the loading of CUDA-related libraries, e.g., cuDNN.
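
As a quick check, you can print the process memory before and after the first transfer; only the first .cuda() call should cause the big jump, while later small tensors add almost nothing. A rough sketch (assuming a Linux /proc filesystem, as on Jetson):

```python
import torch

def rss_mb():
    # Resident set size of this process in MB, read from /proc (Linux only)
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmRSS"):
                return int(line.split()[1]) // 1024
    return -1

print("before first .cuda():", rss_mb(), "MB")

# The first transfer initializes the CUDA context and loads the GPU libraries,
# so most of the increase here is a one-time cost, not the tensor itself
a = torch.zeros(4).cuda()
print("after first .cuda(): ", rss_mb(), "MB")

# A second small tensor should only add roughly the size of its data
b = torch.zeros(4).cuda()
print("after second .cuda():", rss_mb(), "MB")
```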

Thanks.
