I’m trying to get YOLOv3 and TensorRT working on the Jetson Nano 2GB, following the guide here:
However, at the step where you’re supposed to convert the ONNX model into a TensorRT plan, the process runs out of memory and is killed by the kernel OOM-killer every time. The failing command:
python3 onnx_to_tensorrt.py -m yolov3-tiny-416
Things I’ve tried:
- Stopping everything else that uses significant memory: the X server and display manager (lightdm/gdm3), the SSH server, NetworkManager, and containerd.
- Increasing the size of the swap file from 4GB up to 12GB, for a total of 14GB of memory (2GB RAM + 12GB swap).
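To confirm the larger swap file was actually picked up before re-running the conversion, I sanity-check the totals from `/proc/meminfo` (a quick sketch; the 14GB figure assumes 2GB RAM plus the 12GB swap file):

```shell
# Read total RAM and swap (in kB) from /proc/meminfo
ram_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
swap_kb=$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)

# Report the combined total the OOM-killer can draw on
echo "RAM:   $(( ram_kb / 1024 )) MiB"
echo "Swap:  $(( swap_kb / 1024 )) MiB"
echo "Total: $(( (ram_kb + swap_kb) / 1024 )) MiB"
```

On the Nano this should report roughly 14000 MiB total if the swap file is enabled; if `SwapTotal` still shows the old 4GB figure, `swapon` did not take effect.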
I have a 64GB SD card arriving soon so I can go even bigger, but I’m surprised that 14GB isn’t enough for this conversion – is that expected?