Insufficient memory when exporting YOLO model from PyTorch to TensorRT on Jetson Xavier NX

Hello,

I am trying to use a modified version of the YOLOv5 network on the Jetson Xavier NX with JetPack 5.0.2, and I am exporting the model to TensorRT with the following command:

python3 export.py --weights m6_1500_SL1_001.pt --imgsz 1280 768 --include engine --device 0

OUTPUT:

export: data=data/coco128.yaml, weights=['m6_1500_SL1_001.pt'], imgsz=[1280, 768], batch_size=1, device=0, half=False, inplace=False, train=False, optimize=False, int8=False, dynamic=False, simplify=False, opset=14, verbose=False, workspace=4, nms=False, agnostic_nms=False, topk_per_class=100, topk_all=100, iou_thres=0.45, conf_thres=0.25, include=['engine']
YOLOv5 🚀 c981b40 torch 1.12.0a0+2c916ef.nv22.3 CUDA:0 (Xavier, 7513MiB)

Fusing layers... 
Model Summary: 378 layers, 35260464 parameters, 0 gradients, 48.9 GFLOPs

PyTorch: starting from m6_1500_SL1_001.pt (283.6 MB)

ONNX: starting export with onnx 1.13.0...
/home/pratum/yolo_v5_keypoint/models/yolo.py:63: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if self.onnx_dynamic or self.grid[i].shape[2:4] != x[i].shape[2:4]:
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
ONNX: export failure: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument other in method wrapper__equal)

TensorRT: starting export with TensorRT 8.4.1.5...
[01/18/2023-10:06:21] [TRT] [I] [MemUsageChange] Init CUDA: CPU +181, GPU +0, now: CPU 1363, GPU 4084 (MiB)
[01/18/2023-10:06:24] [TRT] [I] [MemUsageChange] Init builder kernel library: CPU +131, GPU +171, now: CPU 1513, GPU 4250 (MiB)
export.py:286: DeprecationWarning: Use set_memory_pool_limit instead.
  config.max_workspace_size = workspace * 1 << 30
[01/18/2023-10:06:25] [TRT] [I] ----------------------------------------------------------------
[01/18/2023-10:06:25] [TRT] [I] Input filename:   m6_1500_SL1_001.onnx
[01/18/2023-10:06:25] [TRT] [I] ONNX IR version:  0.0.7
[01/18/2023-10:06:25] [TRT] [I] Opset version:    13
[01/18/2023-10:06:25] [TRT] [I] Producer name:    pytorch
[01/18/2023-10:06:25] [TRT] [I] Producer version: 1.12.0
[01/18/2023-10:06:25] [TRT] [I] Domain:           
[01/18/2023-10:06:25] [TRT] [I] Model version:    0
[01/18/2023-10:06:25] [TRT] [I] Doc string:       
[01/18/2023-10:06:25] [TRT] [I] ----------------------------------------------------------------
[01/18/2023-10:06:26] [TRT] [W] onnx2trt_utils.cpp:367: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[01/18/2023-10:06:26] [TRT] [W] onnx2trt_utils.cpp:395: One or more weights outside the range of INT32 was clamped
[01/18/2023-10:06:26] [TRT] [W] onnx2trt_utils.cpp:395: One or more weights outside the range of INT32 was clamped
[01/18/2023-10:06:26] [TRT] [W] onnx2trt_utils.cpp:395: One or more weights outside the range of INT32 was clamped
[01/18/2023-10:06:26] [TRT] [W] onnx2trt_utils.cpp:395: One or more weights outside the range of INT32 was clamped
TensorRT: Network Description:
TensorRT:	input "images" with shape (1, 3, 64, 64) and dtype DataType.FLOAT
TensorRT:	output "output" with shape (1, 255, 8) and dtype DataType.FLOAT
TensorRT:	output "onnx::Sigmoid_601" with shape (1, 3, 8, 8, 8) and dtype DataType.FLOAT
TensorRT:	output "onnx::Sigmoid_685" with shape (1, 3, 4, 4, 8) and dtype DataType.FLOAT
TensorRT:	output "onnx::Sigmoid_769" with shape (1, 3, 2, 2, 8) and dtype DataType.FLOAT
TensorRT:	output "onnx::Sigmoid_853" with shape (1, 3, 1, 1, 8) and dtype DataType.FLOAT
TensorRT: building FP32 engine in m6_1500_SL1_001.engine
export.py:306: DeprecationWarning: Use build_serialized_network instead.
  with builder.build_engine(network, config) as engine, open(f, 'wb') as t:
[01/18/2023-10:06:27] [TRT] [I] ---------- Layers Running on DLA ----------
[01/18/2023-10:06:27] [TRT] [I] ---------- Layers Running on GPU ----------
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONSTANT: onnx::Add_612
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONSTANT: onnx::Mul_625
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONSTANT: onnx::Add_636
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONSTANT: onnx::Add_696
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONSTANT: onnx::Mul_709
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONSTANT: onnx::Add_720
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONSTANT: onnx::Add_780
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONSTANT: onnx::Mul_793
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONSTANT: onnx::Add_804
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONSTANT: onnx::Add_864
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONSTANT: onnx::Mul_877
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONSTANT: onnx::Add_888
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_15
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_16), Mul_17)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_18
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_19), Mul_20)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_21 || Conv_38
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_22), Mul_23)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_39), Mul_40)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_24
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_25), Mul_26)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_27
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(PWN(Sigmoid_28), Mul_29), Add_30)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_31
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_32), Mul_33)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_34
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(PWN(Sigmoid_35), Mul_36), Add_37)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] COPY: onnx::Concat_238 copy
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_42
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_43), Mul_44)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_45
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_46), Mul_47)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_48 || Conv_79
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_49), Mul_50)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_80), Mul_81)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_51
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_52), Mul_53)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_54
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(PWN(Sigmoid_55), Mul_56), Add_57)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_58
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_59), Mul_60)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_61
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(PWN(Sigmoid_62), Mul_63), Add_64)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_65
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_66), Mul_67)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_68
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(PWN(Sigmoid_69), Mul_70), Add_71)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_72
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_73), Mul_74)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_75
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(PWN(Sigmoid_76), Mul_77), Add_78)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_83
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_84), Mul_85)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_86
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_87), Mul_88)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_89 || Conv_134
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_90), Mul_91)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_135), Mul_136)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_92
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_93), Mul_94)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_95
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(PWN(Sigmoid_96), Mul_97), Add_98)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_99
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_100), Mul_101)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_102
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(PWN(Sigmoid_103), Mul_104), Add_105)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_106
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_107), Mul_108)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_109
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(PWN(Sigmoid_110), Mul_111), Add_112)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_113
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_114), Mul_115)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_116
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(PWN(Sigmoid_117), Mul_118), Add_119)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_120
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_121), Mul_122)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_123
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(PWN(Sigmoid_124), Mul_125), Add_126)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_127
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_128), Mul_129)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_130
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(PWN(Sigmoid_131), Mul_132), Add_133)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_138
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_139), Mul_140)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_141
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_142), Mul_143)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_144 || Conv_161
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_145), Mul_146)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_162), Mul_163)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_147
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_148), Mul_149)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_150
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(PWN(Sigmoid_151), Mul_152), Add_153)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_154
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_155), Mul_156)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_157
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(PWN(Sigmoid_158), Mul_159), Add_160)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_165
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_166), Mul_167)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_168
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_169), Mul_170)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_171 || Conv_188
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_172), Mul_173)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_189), Mul_190)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_174
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_175), Mul_176)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_177
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(PWN(Sigmoid_178), Mul_179), Add_180)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_181
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_182), Mul_183)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_184
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(PWN(Sigmoid_185), Mul_186), Add_187)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_192
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_193), Mul_194)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_195
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_196), Mul_197)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POOLING: MaxPool_198
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POOLING: MaxPool_199
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POOLING: MaxPool_200
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] COPY: onnx::MaxPool_398 copy
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] COPY: onnx::MaxPool_399 copy
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] COPY: onnx::MaxPool_400 copy
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_202
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_203), Mul_204)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_205
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_206), Mul_207)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] RESIZE: Resize_208
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] COPY: onnx::Concat_413 copy
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_210 || Conv_225
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_211), Mul_212)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_226), Mul_227)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_213
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_214), Mul_215)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_216
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_217), Mul_218)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_219
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_220), Mul_221)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_222
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_223), Mul_224)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_229
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_230), Mul_231)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_232
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_233), Mul_234)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] RESIZE: Resize_235
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] COPY: onnx::Concat_444 copy
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_237 || Conv_252
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_238), Mul_239)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_253), Mul_254)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_240
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_241), Mul_242)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_243
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_244), Mul_245)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_246
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_247), Mul_248)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_249
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_250), Mul_251)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_256
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_257), Mul_258)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_259
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_260), Mul_261)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] RESIZE: Resize_262
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] COPY: onnx::Concat_475 copy
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_264 || Conv_279
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_265), Mul_266)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_280), Mul_281)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_267
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_268), Mul_269)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_270
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_271), Mul_272)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_273
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_274), Mul_275)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_276
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_277), Mul_278)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_283
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_284), Mul_285)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_286
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_364
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_287), Mul_288)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] COPY: input.264 copy
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_290 || Conv_305
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] SHUFFLE: Reshape_381 + Transpose_382
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_291), Mul_292)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_306), Mul_307)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_293
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(Sigmoid_383)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] SLICE: Slice_388
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] SLICE: Slice_401
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] SLICE: Slice_412
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] SLICE: Slice_425
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] SLICE: Slice_430
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_294), Mul_295)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_296
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(onnx::Mul_621 + (Unnamed Layer* 409) [Shuffle] + Mul_403, PWN(onnx::Pow_623 + (Unnamed Layer* 412) [Shuffle], Pow_405)), Mul_407)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_297), Mul_298)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(PWN(onnx::Mul_608 + (Unnamed Layer* 397) [Shuffle] + Mul_390, PWN(onnx::Sub_610 + (Unnamed Layer* 400) [Shuffle], Sub_392)), Add_394), onnx::Mul_614 + (Unnamed Layer* 405) [Shuffle] + Mul_396)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(PWN(onnx::Mul_632 + (Unnamed Layer* 418) [Shuffle] + Mul_414, PWN(onnx::Sub_634 + (Unnamed Layer* 421) [Shuffle], Sub_416)), Add_418), onnx::Mul_638 + (Unnamed Layer* 426) [Shuffle] + Mul_420)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_299
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] COPY: onnx::Concat_615 copy
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] COPY: onnx::Concat_626 copy
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] COPY: onnx::Concat_639 copy
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] SHUFFLE: Reshape_435
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] COPY: Reshape_435_copy_output
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_300), Mul_301)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_302
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_303), Mul_304)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_309
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_310), Mul_311)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_312
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_436
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_313), Mul_314)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] COPY: input.232 copy
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_316 || Conv_331
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] SHUFFLE: Reshape_453 + Transpose_454
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_317), Mul_318)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_332), Mul_333)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_319
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(Sigmoid_455)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] SLICE: Slice_460
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] SLICE: Slice_473
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] SLICE: Slice_484
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] SLICE: Slice_497
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] SLICE: Slice_502
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_320), Mul_321)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_322
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(onnx::Mul_705 + (Unnamed Layer* 465) [Shuffle] + Mul_475, PWN(onnx::Pow_707 + (Unnamed Layer* 468) [Shuffle], Pow_477)), Mul_479)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_323), Mul_324)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(PWN(onnx::Mul_692 + (Unnamed Layer* 453) [Shuffle] + Mul_462, PWN(onnx::Sub_694 + (Unnamed Layer* 456) [Shuffle], Sub_464)), Add_466), onnx::Mul_698 + (Unnamed Layer* 461) [Shuffle] + Mul_468)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(PWN(onnx::Mul_716 + (Unnamed Layer* 474) [Shuffle] + Mul_486, PWN(onnx::Sub_718 + (Unnamed Layer* 477) [Shuffle], Sub_488)), Add_490), onnx::Mul_722 + (Unnamed Layer* 482) [Shuffle] + Mul_492)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_325
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] COPY: onnx::Concat_699 copy
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] COPY: onnx::Concat_710 copy
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] COPY: onnx::Concat_723 copy
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] SHUFFLE: Reshape_507
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] COPY: Reshape_507_copy_output
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_326), Mul_327)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_328
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_329), Mul_330)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_335
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_336), Mul_337)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_338
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_508
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_339), Mul_340)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] COPY: input.200 copy
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_342 || Conv_357
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] SHUFFLE: Reshape_525 + Transpose_526
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_343), Mul_344)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_358), Mul_359)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_345
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(Sigmoid_527)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] SLICE: Slice_532
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] SLICE: Slice_545
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] SLICE: Slice_556
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] SLICE: Slice_569
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] SLICE: Slice_574
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_346), Mul_347)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_348
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(onnx::Mul_789 + (Unnamed Layer* 521) [Shuffle] + Mul_547, PWN(onnx::Pow_791 + (Unnamed Layer* 524) [Shuffle], Pow_549)), Mul_551)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_349), Mul_350)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(PWN(onnx::Mul_776 + (Unnamed Layer* 509) [Shuffle] + Mul_534, PWN(onnx::Sub_778 + (Unnamed Layer* 512) [Shuffle], Sub_536)), Add_538), onnx::Mul_782 + (Unnamed Layer* 517) [Shuffle] + Mul_540)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(PWN(onnx::Mul_800 + (Unnamed Layer* 530) [Shuffle] + Mul_558, PWN(onnx::Sub_802 + (Unnamed Layer* 533) [Shuffle], Sub_560)), Add_562), onnx::Mul_806 + (Unnamed Layer* 538) [Shuffle] + Mul_564)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_351
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] COPY: onnx::Concat_783 copy
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] COPY: onnx::Concat_794 copy
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] COPY: onnx::Concat_807 copy
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] SHUFFLE: Reshape_579
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] COPY: Reshape_579_copy_output
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_352), Mul_353)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_354
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_355), Mul_356)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_361
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(Sigmoid_362), Mul_363)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] CONVOLUTION: Conv_580
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] SHUFFLE: Reshape_597 + Transpose_598
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(Sigmoid_599)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] SLICE: Slice_604
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] SLICE: Slice_617
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] SLICE: Slice_628
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] SLICE: Slice_641
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] SLICE: Slice_646
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(onnx::Mul_873 + (Unnamed Layer* 577) [Shuffle] + Mul_619, PWN(onnx::Pow_875 + (Unnamed Layer* 580) [Shuffle], Pow_621)), Mul_623)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(PWN(onnx::Mul_860 + (Unnamed Layer* 565) [Shuffle] + Mul_606, PWN(onnx::Sub_862 + (Unnamed Layer* 568) [Shuffle], Sub_608)), Add_610), onnx::Mul_866 + (Unnamed Layer* 573) [Shuffle] + Mul_612)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] POINTWISE: PWN(PWN(PWN(onnx::Mul_884 + (Unnamed Layer* 586) [Shuffle] + Mul_630, PWN(onnx::Sub_886 + (Unnamed Layer* 589) [Shuffle], Sub_632)), Add_634), onnx::Mul_890 + (Unnamed Layer* 594) [Shuffle] + Mul_636)
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] COPY: onnx::Concat_867 copy
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] COPY: onnx::Concat_878 copy
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] COPY: onnx::Concat_891 copy
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] SHUFFLE: Reshape_651
[01/18/2023-10:06:27] [TRT] [I] [GpuLayer] COPY: Reshape_651_copy_output
[01/18/2023-10:06:27] [TRT] [I] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +7, now: CPU 1665, GPU 4481 (MiB)
[01/18/2023-10:06:27] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +0, GPU +8, now: CPU 1665, GPU 4489 (MiB)
[01/18/2023-10:06:27] [TRT] [I] Local timing cache in use. Profiling results in this builder pass will not be stored.
[01/18/2023-10:08:42] [TRT] [W] Tactic Device request: 2599MB Available: 2242MB. Device memory is insufficient to use tactic.
[01/18/2023-10:08:42] [TRT] [W] Skipping tactic 3 due to insufficient memory on requested size of 2599 detected for tactic 0x0000000000000004.
[01/18/2023-10:08:42] [TRT] [W] Tactic Device request: 2599MB Available: 2242MB. Device memory is insufficient to use tactic.
[01/18/2023-10:08:42] [TRT] [W] Skipping tactic 8 due to insufficient memory on requested size of 2599 detected for tactic 0x000000000000003c.
[01/18/2023-10:12:17] [TRT] [I] Detected 1 inputs and 9 output network tensors.
[01/18/2023-10:12:18] [TRT] [I] Total Host Persistent Memory: 218496
[01/18/2023-10:12:18] [TRT] [I] Total Device Persistent Memory: 14336
[01/18/2023-10:12:18] [TRT] [I] Total Scratch Memory: 55296
[01/18/2023-10:12:18] [TRT] [I] [MemUsageStats] Peak memory usage of TRT CPU/GPU memory allocators: CPU 29 MiB, GPU 1953 MiB
[01/18/2023-10:12:18] [TRT] [I] [BlockAssignment] Algorithm ShiftNTopDown took 193.957ms to assign 11 blocks to 260 nodes requiring 517124 bytes.
[01/18/2023-10:12:18] [TRT] [I] Total Activation Memory: 517124
[01/18/2023-10:12:18] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +1, GPU +0, now: CPU 1691, GPU 5323 (MiB)
[01/18/2023-10:12:18] [TRT] [I] [MemUsageChange] TensorRT-managed allocation in building engine: CPU +13, GPU +256, now: CPU 13, GPU 256 (MiB)
[01/18/2023-10:12:18] [TRT] [W] The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
[01/18/2023-10:12:18] [TRT] [W] The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
TensorRT: export success, saved as m6_1500_SL1_001.engine (161.9 MB)

Export complete (387.20s)
Results saved to /home/pratum/yolo_v5_keypoint
Visualize with https://netron.app

The script reports that the export finished successfully. However, when I try to run detection with the exported engine, it tells me that the input must be 64x64.

COMMAND:

python3 detect.py --img 1280 768 --source /home/pratum/diego_dataset_06_12_2022/images/test  --weights m6_1500_SL1_001.engine --conf 0.1 --save-txt

OUTPUT:

detect: weights=['m6_1500_SL1_001.engine'], source=/home/pratum/diego_dataset_06_12_2022/images/test, imgsz=[1280, 768], conf_thres=0.1, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=True, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False
YOLOv5 🚀 c981b40 torch 1.12.0a0+2c916ef.nv22.3 CUDA:0 (Xavier, 7513MiB)

Loading m6_1500_SL1_001.engine for TensorRT inference...
[01/18/2023-10:16:20] [TRT] [I] [MemUsageChange] Init CUDA: CPU +186, GPU +0, now: CPU 277, GPU 2392 (MiB)
[01/18/2023-10:16:20] [TRT] [I] Loaded engine size: 154 MiB
[01/18/2023-10:16:22] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +346, GPU +330, now: CPU 813, GPU 2900 (MiB)
[01/18/2023-10:16:22] [TRT] [I] [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +0, GPU +151, now: CPU 0, GPU 151 (MiB)
[01/18/2023-10:16:24] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +0, GPU +0, now: CPU 1132, GPU 3199 (MiB)
[01/18/2023-10:16:24] [TRT] [I] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +1, now: CPU 0, GPU 152 (MiB)
Traceback (most recent call last):
  File "detect.py", line 257, in <module>
    main(opt)
  File "detect.py", line 252, in main
    run(**vars(opt))
  File "/home/pratum/.local/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "detect.py", line 101, in run
    model.warmup(imgsz=(1, 3, *imgsz), half=half)  # warmup
  File "/home/pratum/yolo_v5_keypoint/models/common.py", line 435, in warmup
    self.forward(im)  # warmup
  File "/home/pratum/yolo_v5_keypoint/models/common.py", line 401, in forward
    assert im.shape == self.bindings['images'].shape, (im.shape, self.bindings['images'].shape)
AssertionError: (torch.Size([1, 3, 1280, 768]), (1, 3, 64, 64))
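
To confirm that the 64x64 shape is baked into the engine itself rather than being a preprocessing issue in detect.py, the bindings can be listed directly. A minimal sketch using the TensorRT 8.4 Python API (only the engine filename from my run is assumed):

import tensorrt as trt

# Deserialize the engine and print every binding's name and shape (TensorRT 8.4 API).
logger = trt.Logger(trt.Logger.WARNING)
with open('m6_1500_SL1_001.engine', 'rb') as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
for i in range(engine.num_bindings):
    kind = 'input' if engine.binding_is_input(i) else 'output'
    print(kind, engine.get_binding_name(i), tuple(engine.get_binding_shape(i)))

For my engine this prints "input images (1, 3, 64, 64)" for the first binding, matching the assertion above.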

Looking back at the export log, the ONNX export actually failed with "Expected all tensors to be on the same device", yet the TensorRT stage still ran: the file it parsed reports opset 13 even though I requested opset 14, so I suspect the engine was built from a stale .onnx left over from an earlier run at the default 64x64 image size. The insufficient-memory messages look like warnings about skipped tactics rather than fatal errors, since the engine still built. Any ideas on how to solve this? Is there any way to cross-compile the model? Below is a sketch of the change I am considering in models/yolo.py.
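
The tracer warning in the export log points at models/yolo.py:63, where the grid is rebuilt during tracing, so my working guess is that the rebuilt grid tensors land on the CPU while the feature maps are on cuda:0. A sketch of the kind of change I mean (the surrounding lines are paraphrased from stock YOLOv5; _make_grid and the buffer names are assumed to match my fork):

# models/yolo.py, inside Detect.forward (sketch; exact lines differ in my fork).
# Assumption: self.grid[i] / self.anchor_grid[i] are rebuilt on the CPU during
# ONNX tracing while x[i] lives on cuda:0, causing the device-mismatch failure.
if self.onnx_dynamic or self.grid[i].shape[2:4] != x[i].shape[2:4]:
    self.grid[i], self.anchor_grid[i] = self._make_grid(nx, ny, i)
    self.grid[i] = self.grid[i].to(x[i].device)  # keep grid on the model's device
    self.anchor_grid[i] = self.anchor_grid[i].to(x[i].device)

If that fixes the ONNX step, deleting the stale m6_1500_SL1_001.onnx before re-running export.py should let TensorRT parse a freshly exported 1280x768 graph.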

Thank you very much.

Package: nvidia-jetpack
Version: 5.0.2-b231
Priority: standard
Section: metapackages
Maintainer: NVIDIA Corporation
Installed-Size: 199 kB
Depends: nvidia-jetpack-runtime (= 5.0.2-b231), nvidia-jetpack-dev (= 5.0.2-b231)
Homepage: http://developer.nvidia.com/jetson
Download-Size: 29,3 kB
APT-Sources: https://repo.download.nvidia.com/jetson/common r35.1/main arm64 Packages
Description: NVIDIA Jetpack Meta Package

There has been no update from you for a while, so we assume this is no longer an issue. Hence we are closing this topic. If you need further support, please open a new one.
Thanks

Hi,

This issue is related to the custom implementation of YOLO.
Have you checked with the sample author?

Or is it possible to resize the input images to 64x64 before running inference? A minimal sketch of that is below.
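
For example, with OpenCV (the file name is just a placeholder):

import cv2

# Resize a frame to the 64x64 input the engine was built with.
img = cv2.imread('frame.jpg')
img = cv2.resize(img, (64, 64), interpolation=cv2.INTER_LINEAR)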

Thanks.
