Question about the memory usage of Jetson Nano

Hi! I use a Jetson Orin Nano (8 GB RAM) to accelerate PyTorch ResNet50 inference. I noticed that the used memory reported by jtop is 2.5 GB, while the GPU Shared RAM is only about 1.4 GB. I'm confused as to why the used memory is so much higher than the GPU Shared RAM. Do you know why this is the case? Additionally, why does ResNet50 use so much memory?
Please refer to the attached Python file.

python code.txt (591 Bytes)

Environment information:
Jetpack: 5.1.3
CUDA: 11.4
cuDNN: 8.6
torch: 1.13
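For reference, the "used memory" shown by jtop roughly corresponds to the whole process's resident memory, which includes the CUDA context, cuDNN kernels, and the framework itself, not just GPU tensor allocations. A minimal, stdlib-only sketch (not the attached script) for reading the process RSS on a Linux system such as JetPack:

```python
import os

def process_rss_mb(pid="self"):
    """Return the resident set size (RSS) of a process in MB,
    read from /proc/<pid>/status (Linux only)."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                # Line looks like: "VmRSS:    123456 kB"
                return int(line.split()[1]) / 1024.0
    return 0.0

print(f"Current process RSS: {process_rss_mb():.1f} MB")
```

Comparing this value before and after the first inference call shows how much memory the libraries themselves occupy on top of the model weights.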

Hello,

Thanks for visiting the NVIDIA Developer forums! Your topic will be best served in the Jetson category.

I will move this post over for visibility.

Cheers,
Tom

Hi,

During inference, some memory is required to load the CUDA/cuDNN/TensorRT binaries.
To reduce memory usage, please upgrade to JetPack 6, which ships a newer CUDA version.

CUDA 11.8 introduced a lazy loading feature.
It loads only the kernels that are actually needed, which can reduce memory usage.
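As a sketch (assuming a JetPack 6 / CUDA 11.8+ environment), lazy loading is controlled through the `CUDA_MODULE_LOADING` environment variable, which must be set before the CUDA runtime initializes:

```python
import os

# CUDA 11.8+ reads this variable at initialization time, so it must be
# set before the CUDA runtime is loaded (i.e., before importing torch).
os.environ["CUDA_MODULE_LOADING"] = "LAZY"

# import torch  # import the framework only after setting the variable
```

Alternatively, export the variable in the shell (`export CUDA_MODULE_LOADING=LAZY`) before launching the script.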

Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.