Error in YOLOv4 engine conversion

Please see the NVIDIA-AI-IOT/deepstream_tao_apps repository on GitHub (branch release/tlt3.0) and download the pretrained models:
$ wget https://nvidia.box.com/shared/static/i1cer4s3ox4v8svbfkuj5js8yqm3yazo.zip -O models.zip

Thanks @Morganh.

The conversion is working now:

smarg@smarg-NX:~/Documents/Pritam/deepstream_tao_apps/models/yolov4$ ./tlt-converter -k nvidia_tlt -d 3,544,960 -e trt.fp16.engine -t fp16 -p Input,1x3x544x960,1x3x544x960,1x3x544x960 yolov4_resnet18.etlt
[WARNING] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[WARNING] onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
[WARNING] onnx2trt_utils.cpp:246: (the warning above is repeated 20 more times)
[INFO] ModelImporter.cpp:135: No importer registered for op: BatchedNMSDynamic_TRT. Attempting to import as plugin.
[INFO] builtin_op_importers.cpp:3659: Searching for plugin: BatchedNMSDynamic_TRT, plugin_version: 1, plugin_namespace: 
[INFO] builtin_op_importers.cpp:3676: Successfully created plugin: BatchedNMSDynamic_TRT
[INFO] Detected input dimensions from the model: (-1, 3, 544, 960)
[INFO] Model has dynamic shape. Setting up optimization profiles.
[INFO] Using optimization profile min shape: (1, 3, 544, 960) for input: Input
[INFO] Using optimization profile opt shape: (1, 3, 544, 960) for input: Input
[INFO] Using optimization profile max shape: (1, 3, 544, 960) for input: Input
[INFO] 
[INFO] --------------- Layers running on DLA: 
[INFO] 
[INFO] --------------- Layers running on GPU: 
[INFO] conv1 + Relu16, block_1a_conv_1 + Relu15, block_1a_conv_2, block_1a_conv_shortcut + Add7 + Relu13, block_1b_conv_1 + Relu12, block_1b_conv_2, block_1b_conv_shortcut + Add5 + Relu9, block_2a_conv_1 + Relu8, block_2a_conv_2, block_2a_conv_shortcut + Add3 + Relu5, block_2b_conv_1 + Relu4, block_2b_conv_2, block_2b_conv_shortcut + Add1 + Relu1, block_3a_conv_1 + Relu7, block_3a_conv_2, block_3a_conv_shortcut + Add2 + Relu3, block_3b_conv_1 + Relu2, block_3b_conv_2, block_3b_conv_shortcut + Add + Relu, block_4a_conv_1 + Relu14, block_4a_conv_2, block_4a_conv_shortcut + Add6 + Relu11, block_4b_conv_1 + Relu10, block_4b_conv_2, block_4b_conv_shortcut + Add4 + Relu6, block_4b_relu/Relu:0_pooling_2, block_4b_relu/Relu:0_pooling_1, block_4b_relu/Relu:0_pooling, block_4b_relu/Relu:0 copy, yolo_spp_conv, yolo_spp_conv_lrelu, yolo_expand_conv1, yolo_expand_conv1_lrelu, yolo_conv1_1, yolo_conv1_1_lrelu, yolo_conv1_2, yolo_conv1_2_lrelu, yolo_conv1_3, yolo_conv1_3_lrelu, yolo_conv1_4, yolo_conv1_4_lrelu, yolo_conv1_5, yolo_conv1_5_lrelu, yolo_conv2, yolo_conv1_6, yolo_conv2_lrelu, yolo_conv1_6_lrelu, conv_big_object, Resize, upsample0/transpose_1:0 copy, block_3b_relu/Relu:0 copy, yolo_conv3_1, yolo_conv3_1_lrelu, yolo_conv3_2, yolo_conv3_2_lrelu, yolo_conv3_3, yolo_conv3_3_lrelu, yolo_conv3_4, yolo_conv3_4_lrelu, yolo_conv3_5, yolo_conv3_5_lrelu, yolo_conv4, yolo_conv3_6, yolo_conv4_lrelu, yolo_conv3_6_lrelu, conv_mid_object, Resize1, upsample1/transpose_1:0 copy, block_2b_relu/Relu:0 copy, yolo_conv5_1, yolo_conv5_1_lrelu, yolo_conv5_2, yolo_conv5_2_lrelu, yolo_conv5_3, yolo_conv5_3_lrelu, yolo_conv5_4, yolo_conv5_4_lrelu, yolo_conv5_5, yolo_conv5_5_lrelu, yolo_conv5_6, yolo_conv5_6_lrelu, conv_sm_object, (Unnamed Layer* 128) [Constant], bg_anchor/Identity:0_tile, bg_anchor/Reshape_reshape, Transpose + bg_reshape + bg_bbox_processor/Reshape_reshape, bg_bbox_processor/Reshape:0_cropping, bg_bbox_processor/Reshape:0_cropping_2, PWN(PWN(bg_bbox_processor/Sigmoid, (Unnamed 
Layer* 205) [Constant] + (Unnamed Layer* 206) [Shuffle] + bg_bbox_processor/mul), PWN((Unnamed Layer* 210) [Constant] + (Unnamed Layer* 211) [Shuffle], bg_bbox_processor/sub)), bg_bbox_processor/Reshape:0_cropping_1, PWN(PWN(PWN((Unnamed Layer* 201) [Constant] + (Unnamed Layer* 202) [Shuffle] + bg_bbox_processor/add, bg_bbox_processor/sub_1), bg_bbox_processor/Minimum_min), bg_bbox_processor/Exp), bg_bbox_processor/sub:0 copy, bg_bbox_processor/Exp:0 copy, bg_bbox_processor/Reshape:0_cropping0 copy, bg_bbox_processor/Reshape_1_reshape, bg_anchor/Reshape:0 copy, bg_bbox_processor/Reshape_1:0 copy, (Unnamed Layer* 251) [Constant], md_anchor/Identity:0_tile, md_anchor/Reshape_reshape, Transpose1 + md_reshape + md_bbox_processor/Reshape_reshape, md_bbox_processor/Reshape:0_cropping, md_bbox_processor/Reshape:0_cropping_2, PWN(PWN(md_bbox_processor/Sigmoid, (Unnamed Layer* 328) [Constant] + (Unnamed Layer* 329) [Shuffle] + md_bbox_processor/mul), PWN((Unnamed Layer* 333) [Constant] + (Unnamed Layer* 334) [Shuffle], md_bbox_processor/sub)), md_bbox_processor/Reshape:0_cropping_1, PWN(PWN(PWN((Unnamed Layer* 324) [Constant] + (Unnamed Layer* 325) [Shuffle] + md_bbox_processor/add, md_bbox_processor/sub_1), md_bbox_processor/Minimum_min), md_bbox_processor/Exp), md_bbox_processor/sub:0 copy, md_bbox_processor/Exp:0 copy, md_bbox_processor/Reshape:0_cropping0 copy, md_bbox_processor/Reshape_1_reshape, md_anchor/Reshape:0 copy, md_bbox_processor/Reshape_1:0 copy, (Unnamed Layer* 365) [Constant], sm_anchor/Identity:0_tile, sm_anchor/Reshape_reshape, Transpose2 + sm_reshape + sm_bbox_processor/Reshape_reshape, sm_bbox_processor/Reshape:0_cropping, sm_bbox_processor/Reshape:0_cropping_2, PWN(PWN(sm_bbox_processor/Sigmoid, (Unnamed Layer* 437) [Constant] + (Unnamed Layer* 438) [Shuffle] + sm_bbox_processor/mul), PWN((Unnamed Layer* 441) [Constant] + (Unnamed Layer* 442) [Shuffle], sm_bbox_processor/sub)), sm_bbox_processor/Reshape:0_cropping_1, PWN(PWN(PWN((Unnamed Layer* 434) 
[Constant] + (Unnamed Layer* 435) [Shuffle] + sm_bbox_processor/add, sm_bbox_processor/sub_1), sm_bbox_processor/Minimum_min), sm_bbox_processor/Exp), sm_bbox_processor/sub:0 copy, sm_bbox_processor/Exp:0 copy, sm_bbox_processor/Reshape:0_cropping0 copy, sm_bbox_processor/Reshape_1_reshape, sm_anchor/Reshape:0 copy, sm_bbox_processor/Reshape_1:0 copy, encoded_bg/concat:0 copy, encoded_md/concat:0 copy, encoded_sm/concat:0 copy, cls/Reshape_reshape, cls/Reshape:0_cropping, cls/Reshape:0_cropping_1, PWN(cls/Sigmoid_1, PWN(cls/Sigmoid, cls/mul)), box/Reshape_reshape, box/Reshape:0_cropping, box/Reshape:0_cropping_2, box/Reshape:0_cropping_3, PWN(box/mul_1, box/add_1), box/Reshape:0_cropping_4, box/Reshape:0_cropping_5, PWN(box/mul_3, (Unnamed Layer* 698) [Constant] + (Unnamed Layer* 699) [Shuffle] + box/mul_4), box/sub, box/Reshape:0_cropping_1, box/Reshape:0_cropping_6, box/Reshape:0_cropping_7, PWN(box/mul, box/add), box/Reshape:0_cropping_8, box/Reshape:0_cropping_9, PWN(box/mul_2, (Unnamed Layer* 702) [Constant] + (Unnamed Layer* 703) [Shuffle] + box/mul_6), box/sub_1, box/add_2, box/add_3, box/sub:0 copy, box/sub_1:0 copy, box/add_2:0 copy, box/add_3:0 copy, BatchedNMS_N, 
[INFO] Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
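For anyone landing here later, below is a commented sketch of the same `tlt-converter` invocation, with the flag meanings as given in the TAO Toolkit converter documentation. The key, dimensions, and file names are the ones from the log above; it only prints the command rather than running it, since the `tlt-converter` binary is platform-specific.

```shell
# Annotated sketch of the tlt-converter invocation above (flag meanings per
# the TAO Toolkit converter docs; adjust the key and paths for your model).
KEY=nvidia_tlt                 # -k: encryption key the .etlt was exported with
DIMS=3,544,960                 # -d: input dimensions, given as C,H,W
ENGINE=trt.fp16.engine         # -e: path of the TensorRT engine to write
PRECISION=fp16                 # -t: build precision (fp32 | fp16 | int8)
# -p: optimization profile for the dynamic input tensor, given as
#     name,min-shape,opt-shape,max-shape. All three are 1x3x544x960 here,
#     which is why the log shows identical min/opt/max profiles and a
#     batch size fixed at 1.
PROFILE=Input,1x3x544x960,1x3x544x960,1x3x544x960
MODEL=yolov4_resnet18.etlt

# Print the full command so this sketch is safe to run even without the
# tlt-converter binary installed.
echo "./tlt-converter -k $KEY -d $DIMS -e $ENGINE -t $PRECISION -p $PROFILE $MODEL"
```

To build engines for larger batches, raise the max (and optionally opt) shapes in `-p`, e.g. `Input,1x3x544x960,4x3x544x960,8x3x544x960`.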


Great!


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.