Converting YOLOv4 weights to ONNX using PyTorch runs out of memory

Using Jetson Nano, JetPack 4.4, ONNX 1.4, TensorRT 7.1.3, CUDA 10.2.

I am trying to convert the YOLOv4 weights to ONNX so I can later convert them to TensorRT. The problem is that the kernel kills my process because it runs out of memory.

Below is the command I used.

python3 demo_darknet2onnx.py yolov4.cfg yolov4.weights ./data/giraffe.jpg 1

Has anyone generated ONNX models using GitHub - Tianxiaomo/pytorch-YOLOv4: PyTorch, ONNX and TensorRT implementation of YOLOv4?

My aim is to do Darknet → ONNX → TensorRT so I can use the engine in DeepStream 5.
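For the ONNX → TensorRT step, the engine can typically be built directly on the Nano with the bundled `trtexec` tool. A minimal sketch, assuming TensorRT's default install path and example file names (adjust the ONNX file name to whatever the export script actually produces):

```shell
# Build a TensorRT engine from the exported ONNX model.
# File names are examples; --workspace caps the builder's scratch memory (MiB),
# which helps on the Nano's limited RAM.
/usr/src/tensorrt/bin/trtexec \
    --onnx=yolov4.onnx \
    --saveEngine=yolov4.engine \
    --fp16 \
    --workspace=1024
```

The resulting `yolov4.engine` file can then be referenced from the DeepStream config (`model-engine-file`).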

Does the Jetson Nano have the compute power to generate the models? I tried to generate them on a different system, but that didn't work because TensorRT engines should be built on the platform that runs them.

I increased the swap size and the conversion completed successfully. For reference, I used this script: GitHub - JetsonHacksNano/installSwapfile: Install a swap file on the NVIDIA Jetson Nano Developer Kit. This should help with memory pressure issues.
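For anyone who prefers not to use the script, a swap file can also be created manually with standard Linux tools. A sketch, assuming a path and size chosen for illustration:

```shell
# Create and enable a 4 GB swap file (path and size are examples).
sudo fallocate -l 4G /mnt/4GB.swap
sudo chmod 600 /mnt/4GB.swap     # restrict access, required by swapon
sudo mkswap /mnt/4GB.swap        # format as swap space
sudo swapon /mnt/4GB.swap        # enable immediately

# Optionally make it persistent across reboots:
echo '/mnt/4GB.swap swap swap defaults 0 0' | sudo tee -a /etc/fstab
```

`free -h` afterwards should show the increased swap total.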

Glad to know the issue is resolved, thanks for sharing!