Jetson TX2 doesn't seem to be using allocated swap space when running inference

I am trying to run the TensorFlow Object Detection API. I gave the Jetson TX2 8 GB of extra swap space, and I have confirmed that the swap space is enabled (I can see it in the system monitor). When I run the model for inference, I watch the RAM usage: the RAM gets fully used, but the swap space doesn't get touched. Then the program throws an error saying "too many resources requested for launch".

How am I running out of memory if the system monitor doesn’t show the Jetson using up swap space memory?
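For anyone who wants to watch this more precisely than the system monitor allows, a minimal sketch (assuming a Linux system such as the TX2's L4T) that polls `/proc/meminfo` for RAM vs. swap usage while the model runs:

```python
# Sketch: read /proc/meminfo to watch RAM vs. swap usage on Linux.
# The field names below are standard Linux kernel fields.
def meminfo_kb():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.split()[0])  # values are reported in kB
    return info

m = meminfo_kb()
print("RAM used (kB): ", m["MemTotal"] - m["MemAvailable"])
print("Swap used (kB):", m["SwapTotal"] - m["SwapFree"])
```

Running this in a loop alongside inference will show whether `SwapFree` ever moves, which in this case it does not.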

Some memory cannot be swapped out; for example, some CUDA-related memory must be contiguous physical RAM. Swap can relieve pressure from ordinary programs by moving their pages out of RAM, but your CUDA-related allocations must stay in physical RAM. I would guess that if you ran other RAM-intensive applications while your program runs, you would see those other programs begin to swap out…just not the allocations requiring physical RAM.

Hi,

As linuxdev said, swap memory can't be used for GPU-related memory, e.g., cudaMalloc, cudaMallocHost, and cudaMallocManaged allocations.
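To illustrate why such memory never appears in swap: cudaMallocHost pins host pages so the kernel cannot page them out, in the same spirit as the POSIX mlock call. A rough Python sketch of page pinning via ctypes (Linux only; this is an analogy, not the CUDA driver's actual mechanism):

```python
import ctypes

# Loading with None on Linux exposes libc symbols from the process itself.
libc = ctypes.CDLL(None, use_errno=True)

def pin(buf):
    """Try to lock `buf` into physical RAM so it can never be swapped out,
    similar in spirit to what cudaMallocHost does for host buffers.
    Returns True on success; may fail if RLIMIT_MEMLOCK is exhausted."""
    addr = ctypes.addressof(buf)
    size = ctypes.sizeof(buf)
    return libc.mlock(ctypes.c_void_p(addr), ctypes.c_size_t(size)) == 0

page = (ctypes.c_char * 4096)()
locked = pin(page)
print("pinned:", locked)
if locked:
    # Unlock again so we don't hold the memlock quota.
    libc.munlock(ctypes.c_void_p(ctypes.addressof(page)),
                 ctypes.c_size_t(4096))
```

Pages locked this way count against physical RAM no matter how much swap is configured, which is exactly the situation the GPU allocations are in.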

Also note that parts of this object detection API run on the CPU only (some layers), which may yield poor performance on the Jetson TX2.
Very slow Postprocessing in Object Detection API · Issue #2710 · tensorflow/models · GitHub

Thanks.

did you manage to solve this?

Swap space will slightly reduce what other processes use of physical RAM in competition with the GPU, but there is no substitute for physical RAM for those operations. One would have to change the program to use less RAM, and perhaps compute with smaller kernels.

Sorry for the late response. I was only able to solve this by moving to a model that uses less memory. I used SSD with MobileNet as opposed to Faster R-CNN with Inception-ResNet v2.
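Some rough arithmetic showing why the model switch helps. The parameter counts below are approximate figures from the TF1 detection model zoo (assumed here for illustration), and weights are only part of the footprint; activations and CUDA workspace usually dominate, and they scale up similarly:

```python
# Back-of-the-envelope weight sizes for the two models in question.
# Parameter counts are approximate (assumed from the TF1 model zoo).
BYTES_PER_PARAM = 4  # float32

models = {
    "ssd_mobilenet_v1": 6_800_000,                  # approximate
    "faster_rcnn_inception_resnet_v2": 59_400_000,  # approximate
}

for name, params in models.items():
    mb = params * BYTES_PER_PARAM / 1e6
    print(f"{name}: ~{mb:.0f} MB of weights")
```

The weights alone differ by nearly an order of magnitude, before counting activations, so on a board where everything must fit in shared physical RAM the smaller model is the practical fix.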