Inference OD Memory Consumption

Hi all,

I have a question regarding memory consumption. I have the 2GB Jetson Nano, and the SSD detection network seems to consume all of its RAM; so much so that swap space is used.

What are some recommendations for decreasing the amount of memory used?


Since the Nano 2GB has extremely limited resources, it's recommended to try a more lightweight model.

Please note that swap memory can only be used by the CPU.
It won’t increase the available memory if the inference runs on the GPU.
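To see this in practice, you can check how much swap the system reports versus physical RAM. The sketch below is a minimal, generic Linux example (not Jetson-specific) that parses `/proc/meminfo`; the point is that `SwapTotal` only backs CPU allocations, while GPU allocations on Jetson come out of the same shared physical RAM and cannot spill to swap.

```python
# Minimal sketch: read RAM and swap figures from /proc/meminfo.
# On Jetson, CPU and GPU share physical RAM, but only CPU pages can be
# swapped out -- so swap does not relieve GPU memory pressure.
def read_meminfo(path="/proc/meminfo"):
    """Return /proc/meminfo entries as a dict of {key: value_in_kB}."""
    info = {}
    with open(path) as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.split()[0])  # values are reported in kB
    return info

mem = read_meminfo()
print(mem["MemTotal"] // 1024, "MB total RAM,",
      mem["SwapTotal"] // 1024, "MB swap (usable by CPU only)")
```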


@AastaLLL Thanks for getting back to me. I am using the ssd-mobilenet-v2 network; I followed the Jetson Inference re-train SSD network tutorial. Which network is better for the 2GB version? I am running it with TensorRT.

Also, the model I am using is quite small: the ONNX file is about 25MB.


25MB is only the file size.
Running inference also requires memory for loading libraries and for workspace (e.g., intermediate data).
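As a rough illustration of why peak memory far exceeds the model file size, the sketch below sums the main contributors: weights, intermediate activations, and runtime/library overhead. All the numbers here are hypothetical placeholders, not measured values for ssd-mobilenet-v2 or TensorRT.

```python
# Illustrative sketch: peak inference memory is more than the weight file.
# The three terms below (all in MB) are assumptions for illustration only.
def estimate_inference_mb(weights_mb, activations_mb, runtime_overhead_mb):
    """Sum the main contributors to peak inference memory usage."""
    return weights_mb + activations_mb + runtime_overhead_mb

# Hypothetical example: a 25 MB model can still need hundreds of MB at runtime,
# since CUDA/TensorRT libraries and per-layer workspace dominate the weights.
total = estimate_inference_mb(25, 150, 600)
print(total, "MB estimated peak")  # weights are only a small fraction
```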


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.