trtexec fails when converting yolov5s.onnx to a TensorRT engine

Hello,

While converting the latest yolov5s model file to a TensorRT engine with trtexec, I ran into the error below.
How can I resolve it?

manager@manager-desktop:/usr/src/tensorrt/bin$ ./trtexec --onnx=yolov5s.onnx --saveEngine=yolov5s.engine --workspace=4096
&&&& RUNNING TensorRT.trtexec [TensorRT v8201] # ./trtexec --onnx=yolov5s.onnx --saveEngine=yolov5s.engine --workspace=4096
[07/18/2022-08:19:43] [I] === Model Options ===
[07/18/2022-08:19:43] [I] Format: ONNX
[07/18/2022-08:19:43] [I] Model: yolov5s.onnx
[07/18/2022-08:19:43] [I] Output:
[07/18/2022-08:19:43] [I] === Build Options ===
[07/18/2022-08:19:43] [I] Max batch: explicit batch
[07/18/2022-08:19:43] [I] Workspace: 4096 MiB
[07/18/2022-08:19:43] [I] minTiming: 1
[07/18/2022-08:19:43] [I] avgTiming: 8
[07/18/2022-08:19:43] [I] Precision: FP32
[07/18/2022-08:19:43] [I] Calibration: 
[07/18/2022-08:19:43] [I] Refit: Disabled
[07/18/2022-08:19:43] [I] Sparsity: Disabled
[07/18/2022-08:19:43] [I] Safe mode: Disabled
[07/18/2022-08:19:43] [I] DirectIO mode: Disabled
[07/18/2022-08:19:43] [I] Restricted mode: Disabled
[07/18/2022-08:19:43] [I] Save engine: yolov5s.engine
[07/18/2022-08:19:43] [I] Load engine: 
[07/18/2022-08:19:43] [I] Profiling verbosity: 0
[07/18/2022-08:19:43] [I] Tactic sources: Using default tactic sources
[07/18/2022-08:19:43] [I] timingCacheMode: local
[07/18/2022-08:19:43] [I] timingCacheFile: 
[07/18/2022-08:19:43] [I] Input(s)s format: fp32:CHW
[07/18/2022-08:19:43] [I] Output(s)s format: fp32:CHW
[07/18/2022-08:19:43] [I] Input build shapes: model
[07/18/2022-08:19:43] [I] Input calibration shapes: model
[07/18/2022-08:19:43] [I] === System Options ===
[07/18/2022-08:19:43] [I] Device: 0
[07/18/2022-08:19:43] [I] DLACore: 
[07/18/2022-08:19:43] [I] Plugins:
[07/18/2022-08:19:43] [I] === Inference Options ===
[07/18/2022-08:19:43] [I] Batch: Explicit
[07/18/2022-08:19:43] [I] Input inference shapes: model
[07/18/2022-08:19:43] [I] Iterations: 10
[07/18/2022-08:19:43] [I] Duration: 3s (+ 200ms warm up)
[07/18/2022-08:19:43] [I] Sleep time: 0ms
[07/18/2022-08:19:43] [I] Idle time: 0ms
[07/18/2022-08:19:43] [I] Streams: 1
[07/18/2022-08:19:43] [I] ExposeDMA: Disabled
[07/18/2022-08:19:43] [I] Data transfers: Enabled
[07/18/2022-08:19:43] [I] Spin-wait: Disabled
[07/18/2022-08:19:43] [I] Multithreading: Disabled
[07/18/2022-08:19:43] [I] CUDA Graph: Disabled
[07/18/2022-08:19:43] [I] Separate profiling: Disabled
[07/18/2022-08:19:43] [I] Time Deserialize: Disabled
[07/18/2022-08:19:43] [I] Time Refit: Disabled
[07/18/2022-08:19:43] [I] Skip inference: Disabled
[07/18/2022-08:19:43] [I] Inputs:
[07/18/2022-08:19:43] [I] === Reporting Options ===
[07/18/2022-08:19:43] [I] Verbose: Disabled
[07/18/2022-08:19:43] [I] Averages: 10 inferences
[07/18/2022-08:19:43] [I] Percentile: 99
[07/18/2022-08:19:43] [I] Dump refittable layers:Disabled
[07/18/2022-08:19:43] [I] Dump output: Disabled
[07/18/2022-08:19:43] [I] Profile: Disabled
[07/18/2022-08:19:43] [I] Export timing to JSON file: 
[07/18/2022-08:19:43] [I] Export output to JSON file: 
[07/18/2022-08:19:43] [I] Export profile to JSON file: 
[07/18/2022-08:19:43] [I] 
[07/18/2022-08:19:43] [I] === Device Information ===
[07/18/2022-08:19:43] [I] Selected Device: Xavier
[07/18/2022-08:19:43] [I] Compute Capability: 7.2
[07/18/2022-08:19:43] [I] SMs: 6
[07/18/2022-08:19:43] [I] Compute Clock Rate: 1.109 GHz
[07/18/2022-08:19:43] [I] Device Global Memory: 7765 MiB
[07/18/2022-08:19:43] [I] Shared Memory per SM: 96 KiB
[07/18/2022-08:19:43] [I] Memory Bus Width: 256 bits (ECC disabled)
[07/18/2022-08:19:43] [I] Memory Clock Rate: 1.109 GHz
[07/18/2022-08:19:43] [I] 
[07/18/2022-08:19:43] [I] TensorRT version: 8.2.1
[07/18/2022-08:19:44] [I] [TRT] [MemUsageChange] Init CUDA: CPU +362, GPU +0, now: CPU 381, GPU 4882 (MiB)
[07/18/2022-08:19:45] [I] [TRT] [MemUsageSnapshot] Begin constructing builder kernel library: CPU 381 MiB, GPU 4910 MiB
[07/18/2022-08:19:45] [I] [TRT] [MemUsageSnapshot] End constructing builder kernel library: CPU 486 MiB, GPU 5016 MiB
[07/18/2022-08:19:45] [I] Start parsing network model
[07/18/2022-08:19:46] [I] [TRT] ----------------------------------------------------------------
[07/18/2022-08:19:46] [I] [TRT] Input filename:   yolov5s.onnx
[07/18/2022-08:19:46] [I] [TRT] ONNX IR version:  0.0.7
[07/18/2022-08:19:46] [I] [TRT] Opset version:    12
[07/18/2022-08:19:46] [I] [TRT] Producer name:    pytorch
[07/18/2022-08:19:46] [I] [TRT] Producer version: 1.10
[07/18/2022-08:19:46] [I] [TRT] Domain:           
[07/18/2022-08:19:46] [I] [TRT] Model version:    0
[07/18/2022-08:19:46] [I] [TRT] Doc string:       
[07/18/2022-08:19:46] [I] [TRT] ----------------------------------------------------------------
[07/18/2022-08:19:46] [W] [TRT] onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[07/18/2022-08:19:46] [I] Finish parsing network model
[07/18/2022-08:19:46] [I] [TRT] ---------- Layers Running on DLA ----------
[07/18/2022-08:19:46] [I] [TRT] ---------- Layers Running on GPU ----------
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_0
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_1), Mul_2)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_3
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_4), Mul_5)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_6 || Conv_16
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_7), Mul_8)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_9
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_10), Mul_11)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_12
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(PWN(Sigmoid_13), Mul_14), Add_15)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_17), Mul_18)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_20
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_21), Mul_22)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_23
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_24), Mul_25)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_26 || Conv_43
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_27), Mul_28)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_29
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_30), Mul_31)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_32
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(PWN(Sigmoid_33), Mul_34), Add_35)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_36
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_37), Mul_38)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_39
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(PWN(Sigmoid_40), Mul_41), Add_42)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_44), Mul_45)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_47
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_48), Mul_49)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_50
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_51), Mul_52)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_53 || Conv_77
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_54), Mul_55)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_56
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_57), Mul_58)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_59
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(PWN(Sigmoid_60), Mul_61), Add_62)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_63
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_64), Mul_65)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_66
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(PWN(Sigmoid_67), Mul_68), Add_69)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_70
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_71), Mul_72)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_73
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(PWN(Sigmoid_74), Mul_75), Add_76)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_78), Mul_79)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_81
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_82), Mul_83)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_84
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_85), Mul_86)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_87 || Conv_97
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_88), Mul_89)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_90
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_91), Mul_92)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_93
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(PWN(Sigmoid_94), Mul_95), Add_96)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_98), Mul_99)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_101
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_102), Mul_103)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_104
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_105), Mul_106)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] MaxPool_107
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] MaxPool_108
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] MaxPool_109
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] 228 copy
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] 229 copy
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] 230 copy
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] 231 copy
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_111
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_112), Mul_113)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_114
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_115), Mul_116)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Resize_118
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] 243 copy
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_120 || Conv_129
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_121), Mul_122)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_123
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_124), Mul_125)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_126
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_127), Mul_128)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_130), Mul_131)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_133
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_134), Mul_135)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_136
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_137), Mul_138)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Resize_140
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] 268 copy
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_142 || Conv_151
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_143), Mul_144)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_145
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_146), Mul_147)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_148
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_149), Mul_150)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_152), Mul_153)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_155
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_156), Mul_157)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_158
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_159), Mul_160)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] 263 copy
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_162 || Conv_171
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_163), Mul_164)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_165
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_166), Mul_167)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_168
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_169), Mul_170)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_172), Mul_173)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_175
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_176), Mul_177)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_178
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_179), Mul_180)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] 238 copy
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_182 || Conv_191
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_183), Mul_184)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_185
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_186), Mul_187)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_188
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_189), Mul_190)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_192), Mul_193)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_195
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_196), Mul_197)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_198
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Reshape_199 + Transpose_200
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(Sigmoid_201)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Reshape_202
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_203
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Reshape_204 + Transpose_205
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(Sigmoid_206)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Reshape_207
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Conv_208
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Reshape_209 + Transpose_210
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] PWN(Sigmoid_211)
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] Reshape_212
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] 347 copy
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] 369 copy
[07/18/2022-08:19:46] [I] [TRT] [GpuLayer] 391 copy
[07/18/2022-08:19:47] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +226, GPU +201, now: CPU 744, GPU 5302 (MiB)
[07/18/2022-08:19:49] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +307, GPU +312, now: CPU 1051, GPU 5614 (MiB)
[07/18/2022-08:19:49] [I] [TRT] Local timing cache in use. Profiling results in this builder pass will not be stored.
[07/18/2022-08:31:28] [I] [TRT] Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
[07/18/2022-08:35:07] [I] [TRT] Detected 1 inputs and 4 output network tensors.
[07/18/2022-08:35:07] [I] [TRT] Total Host Persistent Memory: 142112
[07/18/2022-08:35:07] [I] [TRT] Total Device Persistent Memory: 45144576
[07/18/2022-08:35:07] [I] [TRT] Total Scratch Memory: 0
[07/18/2022-08:35:07] [I] [TRT] [MemUsageStats] Peak memory usage of TRT CPU/GPU memory allocators: CPU 7 MiB, GPU 2644 MiB
[07/18/2022-08:35:07] [I] [TRT] [BlockAssignment] Algorithm ShiftNTopDown took 39.0571ms to assign 7 blocks to 116 nodes requiring 356352000 bytes.
[07/18/2022-08:35:07] [I] [TRT] Total Activation Memory: 356352000
[07/18/2022-08:35:07] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +5, now: CPU 1519, GPU 5818 (MiB)
[07/18/2022-08:35:07] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +1, now: CPU 1519, GPU 5819 (MiB)
[07/18/2022-08:35:07] [I] [TRT] [MemUsageChange] TensorRT-managed allocation in building engine: CPU +3, GPU +64, now: CPU 3, GPU 64 (MiB)
[07/18/2022-08:35:07] [I] [TRT] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 1552, GPU 5843 (MiB)
[07/18/2022-08:35:07] [I] [TRT] Loaded engine size: 44 MiB
[07/18/2022-08:35:07] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 1559, GPU 5843 (MiB)
[07/18/2022-08:35:07] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +0, now: CPU 1559, GPU 5843 (MiB)
[07/18/2022-08:35:07] [I] [TRT] [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +0, GPU +43, now: CPU 0, GPU 43 (MiB)
[07/18/2022-08:35:07] [E] Saving engine to file failed.
[07/18/2022-08:35:07] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v8201] # ./trtexec --onnx=yolov5s.onnx --saveEngine=yolov5s.engine --workspace=4096

Thank you.

Hi,

Based on your log, the engine itself was built successfully, but trtexec doesn't have permission to write the output file into that folder (/usr/src/tensorrt/bin).
Could you try saving the engine to a different location?
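
One quick way to confirm this (assuming the default /usr/src/tensorrt/bin directory shown in your log) is to check whether your user can write there:

$ ls -ld /usr/src/tensorrt/bin
$ touch /usr/src/tensorrt/bin/write_test

If the touch fails with "Permission denied", the directory is the problem rather than the model or the build.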

For example, to save it under your home directory:

$ ./trtexec --onnx=yolov5s.onnx --saveEngine=~/yolov5s.engine --workspace=4096

Then you should find yolov5s.engine under /home/manager/.
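
Once the engine file has been written, you can also load it back with trtexec to check that it deserializes and runs; the path below assumes the home-directory example above:

$ ./trtexec --loadEngine=/home/manager/yolov5s.engine

This skips the build step and just benchmarks the serialized engine.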

Thanks.
