Jetson Nano 2GB Out of Memory With ONNX->TensorRT Conversion

I’m trying to get YOLOv3 and TensorRT working on the Jetson Nano 2GB, following the guide here:

However, at the step where you’re supposed to convert the ONNX model into a TensorRT plan, the process is always killed. Specifically, this command runs out of memory and gets killed by the kernel’s OOM killer:

python3 onnx_to_tensorrt.py -m yolov3-tiny-416

Things I’ve tried:

  1. Killing off anything else using a significant amount of memory: the X server (lightdm/gdm3), the SSH server, NetworkManager, and containerd.
  2. Increasing the size of the swap file from 4GB up to 12GB, for a total of 14GB of memory (2GB RAM + 12GB swap).
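
For what it’s worth, a quick way to confirm the enlarged swap file is actually active is to read /proc/meminfo (Linux only). This is just a small illustrative sketch, not part of the conversion script:

```python
# Sanity check: report total RAM and swap as the kernel sees them.
# Reads /proc/meminfo (Linux only); values in that file are in kB.
def read_meminfo(path="/proc/meminfo"):
    info = {}
    with open(path) as f:
        for line in f:
            key, rest = line.split(":", 1)
            info[key] = int(rest.split()[0])  # size in kB
    return info

if __name__ == "__main__":
    mem = read_meminfo()
    print(f"MemTotal:  {mem['MemTotal'] / 1048576:.1f} GiB")
    print(f"SwapTotal: {mem['SwapTotal'] / 1048576:.1f} GiB")
```

On the setup described above, SwapTotal should report roughly 12 GiB once the new swap file is enabled with swapon.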

I have a 64GB SD card arriving soon, but I’m really surprised that 14GB is not enough – is this expected?

Hi,

Please note that swap is not GPU-accessible memory.
So the memory available to the GPU is still only 2GB.

We haven’t tested the source you shared above.
However, YOLOv3-Tiny is one of our benchmark models, and we get 49 fps with Tiny YOLOv3 (416x416) on the Nano 2GB.
Below is our benchmark source for your reference:

Thanks.