Cannot export NVIDIA RetinaNet to TensorRT

Hello.

I am using a Jetson Nano with JetPack 4.4, flashed by following the NVIDIA documentation (Getting Started With Jetson Nano Developer Kit | NVIDIA Developer).

I am trying to use the NVIDIA RetinaNet repository (GitHub - NVIDIA/retinanet-examples: Fast and accurate object detection with end-to-end GPU optimization). When I run its code to export my ONNX file to a TensorRT engine, I simply get a segmentation fault; there is no other output.
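
For reference, roughly the same ONNX-to-engine conversion can be attempted with the trtexec tool bundled with JetPack, which at least shows whether TensorRT itself can parse the file. A sketch, assuming the exported file is named model.onnx (the file name and workspace size are placeholders, not values from the repository):

$ /usr/src/tensorrt/bin/trtexec --onnx=model.onnx --saveEngine=model.plan --workspace=1024 --fp16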

The host machine and the target machine use the same version of TensorRT (7.1.3) and CUDA (10.2), and both support ONNX opset 11.

For reference, I have already used this repository on a TX2 and a Xavier, both running JetPack 4.4, and never encountered this issue before.

Any help on this matter would be greatly appreciated. Thanks.

Hi,

This issue may come from the memory limitation of the Nano.
Would you mind monitoring the memory usage with tegrastats while the export runs?

$ sudo tegrastats
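
While the export runs, watch the RAM field of the output (used/total, in MB); if the used value climbs toward the Nano's 4 GB total just before the segmentation fault, the process is running out of memory.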

Thanks.

It was indeed a memory limitation. I stopped all applications on the Nano and ran the export program again without issue.
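
In case it helps anyone hitting the same symptom, a sketch of how memory can be freed on a Nano before a large TensorRT build; the swapfile size and the switch to the console-only boot target are assumptions on my part, not something the repository requires:

# Boot to the console instead of the desktop to reclaim RAM
# (restore later with `sudo systemctl set-default graphical.target`)
$ sudo systemctl set-default multi-user.target
$ sudo reboot

# Optionally add swap so the engine builder has some headroom
$ sudo fallocate -l 4G /swapfile
$ sudo chmod 600 /swapfile
$ sudo mkswap /swapfile
$ sudo swapon /swapfile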