I exported a custom-trained YOLO (.pt) model to a TensorRT engine (.engine) using yolov5's export.py on my Jetson Orin Nano (8GB). The export consumes most of the available memory, so I cannot run the same export on the Jetson Orin Nano (4GB). Is there a way to run inference on the Jetson Orin Nano (4GB) with the engine exported on the Jetson Orin Nano (8GB)?
Model Details
PyTorch: starting from custom.pt with output shape (32, 7056, 9) (659.0 MB)
ONNX: 329 MB
TensorRT: input "images" with shape (32, 3, 256, 448) DataType.FLOAT
TensorRT: output "output0" with shape (32, 7056, 9) DataType.FLOAT
TensorRT: building FP32 engine as custom.engine
[07/21/2023-11:44:22] [TRT] [W] Tactic Device request: 560MB Available: 328MB. Device memory is insufficient to use tactic.
[07/21/2023-11:44:22] [TRT] [W] Skipping tactic 25 due to insufficient memory on requested size of 560 detected for tactic 0x000000000000001d.
[07/21/2023-11:44:22] [TRT] [W] Tactic Device request: 560MB Available: 328MB. Device memory is insufficient to use tactic.
[07/21/2023-11:44:22] [TRT] [W] Skipping tactic 26 due to insufficient memory on requested size of 560 detected for tactic 0x000000000000001e.
[07/21/2023-11:44:23] [TRT] [E] 10: [optimizer.cpp::computeCosts::3728] Error Code 10: Internal Error (Could not find any implementation for node PWN(PWN(/model.0/act/Sigmoid), /model.0/act/Mul).)
TensorRT: export failure ❌ 256.3s: __enter__
Based on the error log, the TensorRT optimizer cannot find any implementation that fits within the limited device memory.
It seems that you set the batch size to 32.
Would you mind lowering the value and trying again?
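For reference, a re-export with a smaller batch size could look like the sketch below. It assumes the standard yolov5 export.py flags; the weights path and image size are taken from the details above, and the batch size of 1 is just a starting point you can raise once the build succeeds:

```shell
# Re-export the checkpoint with batch size 1 instead of 32.
# --imgsz 256 448 matches the (3, 256, 448) input reported above.
# --half builds an FP16 engine instead of FP32, which also lowers the
# builder's memory needs; drop it if you require an FP32 engine.
python export.py \
    --weights custom.pt \
    --include engine \
    --imgsz 256 448 \
    --batch-size 1 \
    --device 0 \
    --half
```

This command has to run on the Jetson itself, since TensorRT builds engines for the device it runs on.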