Export TensorRT engine with YOLOv5 export.py (Jetson Nano)

Hello,
I’m trying to export the stock yolov5s.pt model to a yolov5s.engine model using export.py from GitHub - ultralytics/yolov5: YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite on my Jetson Nano 4 GB.

When running 'python3 export.py --weights yolov5s.pt --include engine --device 0', I get these error messages:

Fusing layers…
YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients

PyTorch: starting from yolov5s.pt with output shape (1, 25200, 85) (14.1 MB)
Python 3.7.0 required by YOLOv5, but Python 3.6.9 is currently installed

ONNX: starting export with onnx 1.11.0…
ONNX: export success, saved as yolov5s.onnx (27.8 MB)

TensorRT: starting export with TensorRT 7.1.3.0…
[TensorRT] WARNING: onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
TensorRT: Network Description:
TensorRT: input “images” with shape (1, 3, 640, 640) and dtype DataType.FLOAT
TensorRT: output “output” with shape (-1, -1, -1) and dtype DataType.FLOAT
TensorRT: building FP32 engine in yolov5s.engine
[TensorRT] ERROR: …/rtSafe/safeRuntime.cpp (25) - Cuda Error in allocate: 2 (out of memory)
[TensorRT] ERROR: …/rtSafe/safeRuntime.cpp (25) - Cuda Error in allocate: 2 (out of memory)

TensorRT: export failure: enter

I tried increasing swap memory (currently 6 GB of swap, but it doesn't really help).
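(One thing I still want to try: recent versions of export.py seem to accept a --workspace argument for the TensorRT workspace size in GB, e.g. 'python3 export.py --weights yolov5s.pt --include engine --device 0 --workspace 1'. I'm not sure the version I checked out already has that flag.)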
As you can see, the ONNX model exports fine…
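Since the ONNX file is produced correctly, a possible workaround is to build the engine from it directly with a much smaller builder workspace, so the build doesn't exhaust the Nano's 4 GB. Here is a minimal sketch with the TensorRT 7 Python API (file names match the ones above; the 256 MiB workspace is my guess at a safe value, not a verified setting):

import tensorrt as trt

LOGGER = trt.Logger(trt.Logger.INFO)
# ONNX parsing in TensorRT 7 requires an explicit-batch network
EXPLICIT_BATCH = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

builder = trt.Builder(LOGGER)
network = builder.create_network(EXPLICIT_BATCH)
parser = trt.OnnxParser(network, LOGGER)

with open('yolov5s.onnx', 'rb') as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit('ONNX parse failed')

config = builder.create_builder_config()
config.max_workspace_size = 256 << 20  # 256 MiB instead of several GiB

engine = builder.build_engine(network, config)  # TensorRT 7 API
if engine is None:
    raise SystemExit('engine build failed (possibly still out of memory)')
with open('yolov5s.engine', 'wb') as f:
    f.write(engine.serialize())

The same thing should be possible from the command line with trtexec, which ships with JetPack: '/usr/src/tensorrt/bin/trtexec --onnx=yolov5s.onnx --saveEngine=yolov5s.engine --workspace=256' (workspace is in MB for TensorRT 7).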
Thanks

Hi, I have the same error. Have you managed to fix it?

Hi, same error for me…
Has anyone solved it?