I’m building an object detection tool using TensorFlow on a Jetson Nano. I trained an R-FCN ResNet101 model on a CPU and am now trying to run inference on the Nano. Inference uses about 4 GB of memory, but my Nano only has 3 GB free, so the process starts without errors, but after a while the board freezes and the process is killed (the OS also shows a warning that available memory is too low).
I’d like to know:
Is there a way to increase the memory available for inference?
Does a swap file work for inference?
Does TensorRT decrease memory consumption for inference?
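
For question 2, this is the kind of swap setup I had in mind (a sketch assuming a standard Ubuntu-based JetPack image; the file path and size are illustrative, and I haven’t tried this yet):

```shell
# Create a 4 GB swap file (size chosen to cover the ~1 GB shortfall with headroom)
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Make it persistent across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```

My concern is whether swap on the SD card would be fast enough to be usable during inference, or whether the process would still effectively stall.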