Error in YOLOv4 engine conversion

But with the commands

$ sudo ln -s libnvinfer_plugin.so.7.1.3 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so
$ sudo ln -s libnvinfer_plugin.so.7.1.3 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7

I was getting

ln: failed to create symbolic link '/usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so': File exists
ln: failed to create symbolic link '/usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7': File exists

Just run
$ sudo rm /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so
$ sudo rm /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7

then,
$ sudo ln -s libnvinfer_plugin.so.7.1.3 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so
$ sudo ln -s libnvinfer_plugin.so.7.1.3 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7
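
As a side note, ln -sfn can overwrite an existing symlink in one step, so the separate rm should not be strictly necessary, and running sudo ldconfig afterwards refreshes the loader cache. A minimal alternative:

$ sudo ln -sfn libnvinfer_plugin.so.7.1.3 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so
$ sudo ln -sfn libnvinfer_plugin.so.7.1.3 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7
$ sudo ldconfig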

Okay, now I am getting:

smarg@smarg-NX:/usr/lib/aarch64-linux-gnu$ ll /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so*
lrwxrwxrwx 1 root root       26 Oct 12 14:50 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so -> libnvinfer_plugin.so.7.1.3*
lrwxrwxrwx 1 root root       26 Oct 12 14:50 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7 -> libnvinfer_plugin.so.7.1.3*
lrwxrwxrwx 1 root root       26 Oct 12 14:30 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.0.0 -> libnvinfer_plugin.so.7.1.3*
-rwxr-xr-x 1 root root 11028968 Oct 12 14:36 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.3*
smarg@smarg-NX:/usr/lib/aarch64-linux-gnu$ 

Now what should I do next?
Still the same issue:

Sometimes the error message is visible and sometimes it is not.

Can you try to rebuild the TRT OSS plugin?
On my side, the size is 10009144.
-rwxr-xr-x 1 root root 10009144 Oct 12 15:06 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.3*

Yes, retrying.

Hi @morganh
I rebuilt the TRT OSS plugin, but I still have the same issue.

Steps I followed:

    $ git clone -b 21.03 https://github.com/nvidia/TensorRT
    $ cd TensorRT/
    $ git submodule update --init --recursive
    $ export TRT_SOURCE=`pwd`
    $ cd $TRT_SOURCE
    $ mkdir -p build && cd build
    $ /usr/local/bin/cmake .. -DGPU_ARCHS=72 -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/ -DCMAKE_C_COMPILER=/usr/bin/gcc -DTRT_BIN_DIR=`pwd`/out

    $ ll /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so*
    # Previously my NX was using libnvinfer_plugin.so.7.1.3
    $ sudo cp libnvinfer_plugin.so.7.2.2 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.3
    $ sudo ldconfig

    $ ./tlt-converter -k nvidia_tlt -d 3,544,960 -e trt.fp16.engine -t fp16 -p Input,1x3x544x960,1x3x544x960,1x3x544x960 yolov4_resnet18.etlt
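
For completeness, the cmake configure step above is normally followed by building the plugin and picking it up from the build output before the copy. A minimal sketch, assuming the usual TensorRT OSS layout where the rebuilt library lands in the out directory under build:

    $ make -j$(nproc) nvinfer_plugin        # build only the plugin library
    $ ls out/libnvinfer_plugin.so.*         # rebuilt library, e.g. libnvinfer_plugin.so.7.2.*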

Issue:

Can you try my plugin, which was built on an NX? libnvinfer_plugin.so.7.2.2 (9.5 MB)

Sure.
So I only have to run these two commands:

    $ sudo cp libnvinfer_plugin.so.7.2.2 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.3
    $ sudo ldconfig

And libnvinfer_plugin.so.7.2.2 in the command above should be replaced with your plugin?

Can you also share your YOLOv4 engine file?
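
A quick way to confirm which plugin library the loader will actually resolve after the copy and ldconfig (a minimal check; the ~9.5 MB size refers to the shared build mentioned above):

    $ ldconfig -p | grep libnvinfer_plugin
    $ ll /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.3    # size should now roughly match the shared build (~9.5 MB)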

Yes, please share the result of $ ll /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so* as well.

Below is the output of $ ll /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so*

But tlt-converter is again giving me the error:

smarg@smarg-NX:~/Documents/Pritam/Models/yolov4$ ./tlt-converter -k nvidia_tlt -d 3,544,960 -e trt.fp16.engine -t fp16 -p Input,1x3x544x960,1x3x544x960,1x3x544x960 yolov4_resnet18.etlt
[ERROR] UffParser: Unsupported number of graph 0
[ERROR] Failed to parse the model, please check the encoding key to make sure it's correct
[ERROR] Network must have at least one output
[ERROR] Network validation failed.
[ERROR] Unable to create engine
Segmentation fault (core dumped)

Can you run
$ md5sum yolov4_resnet18.etlt

Result of $ md5sum yolov4_resnet18.etlt

d41d8cd98f00b204e9800998ecf8427e yolov4_resnet18.etlt

It is different. Mine is as below. Note that d41d8cd98f00b204e9800998ecf8427e is the MD5 of an empty file, so the downloaded file is most likely zero bytes. Can you double-check that it was downloaded correctly?
$ md5sum yolov4_resnet18.etlt
69f6e4cbeaa4df4e95f8808fa9167e52 yolov4_resnet18.etlt

Let me download the file again.

Please see https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps (branch release/tlt3.0):
$ wget https://nvidia.box.com/shared/static/i1cer4s3ox4v8svbfkuj5js8yqm3yazo.zip -O models.zip
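
After the download, the archive can be unpacked and the etlt verified against the expected checksum (a minimal sketch; the location of the file inside models.zip is not assumed, hence the find):

$ unzip models.zip
$ find . -name yolov4_resnet18.etlt -exec md5sum {} \;    # expect 69f6e4cbeaa4df4e95f8808fa9167e52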

Thanks @Morganh

Now it is converting

smarg@smarg-NX:~/Documents/Pritam/deepstream_tao_apps/models/yolov4$ ./tlt-converter -k nvidia_tlt -d 3,544,960 -e trt.fp16.engine -t fp16 -p Input,1x3x544x960,1x3x544x960,1x3x544x960 yolov4_resnet18.etlt
[WARNING] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[WARNING] onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
[WARNING] onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
[WARNING] onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
[WARNING] onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
[WARNING] onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
[WARNING] onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
[WARNING] onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
[WARNING] onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
[WARNING] onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
[WARNING] onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
[WARNING] onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
[WARNING] onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
[WARNING] onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
[WARNING] onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
[WARNING] onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
[WARNING] onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
[WARNING] onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
[WARNING] onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
[WARNING] onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
[WARNING] onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
[WARNING] onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
[INFO] ModelImporter.cpp:135: No importer registered for op: BatchedNMSDynamic_TRT. Attempting to import as plugin.
[INFO] builtin_op_importers.cpp:3659: Searching for plugin: BatchedNMSDynamic_TRT, plugin_version: 1, plugin_namespace: 
[INFO] builtin_op_importers.cpp:3676: Successfully created plugin: BatchedNMSDynamic_TRT
[INFO] Detected input dimensions from the model: (-1, 3, 544, 960)
[INFO] Model has dynamic shape. Setting up optimization profiles.
[INFO] Using optimization profile min shape: (1, 3, 544, 960) for input: Input
[INFO] Using optimization profile opt shape: (1, 3, 544, 960) for input: Input
[INFO] Using optimization profile max shape: (1, 3, 544, 960) for input: Input
[INFO] 
[INFO] --------------- Layers running on DLA: 
[INFO] 
[INFO] --------------- Layers running on GPU: 
[INFO] conv1 + Relu16, block_1a_conv_1 + Relu15, block_1a_conv_2, block_1a_conv_shortcut + Add7 + Relu13, block_1b_conv_1 + Relu12, block_1b_conv_2, block_1b_conv_shortcut + Add5 + Relu9, block_2a_conv_1 + Relu8, block_2a_conv_2, block_2a_conv_shortcut + Add3 + Relu5, block_2b_conv_1 + Relu4, block_2b_conv_2, block_2b_conv_shortcut + Add1 + Relu1, block_3a_conv_1 + Relu7, block_3a_conv_2, block_3a_conv_shortcut + Add2 + Relu3, block_3b_conv_1 + Relu2, block_3b_conv_2, block_3b_conv_shortcut + Add + Relu, block_4a_conv_1 + Relu14, block_4a_conv_2, block_4a_conv_shortcut + Add6 + Relu11, block_4b_conv_1 + Relu10, block_4b_conv_2, block_4b_conv_shortcut + Add4 + Relu6, block_4b_relu/Relu:0_pooling_2, block_4b_relu/Relu:0_pooling_1, block_4b_relu/Relu:0_pooling, block_4b_relu/Relu:0 copy, yolo_spp_conv, yolo_spp_conv_lrelu, yolo_expand_conv1, yolo_expand_conv1_lrelu, yolo_conv1_1, yolo_conv1_1_lrelu, yolo_conv1_2, yolo_conv1_2_lrelu, yolo_conv1_3, yolo_conv1_3_lrelu, yolo_conv1_4, yolo_conv1_4_lrelu, yolo_conv1_5, yolo_conv1_5_lrelu, yolo_conv2, yolo_conv1_6, yolo_conv2_lrelu, yolo_conv1_6_lrelu, conv_big_object, Resize, upsample0/transpose_1:0 copy, block_3b_relu/Relu:0 copy, yolo_conv3_1, yolo_conv3_1_lrelu, yolo_conv3_2, yolo_conv3_2_lrelu, yolo_conv3_3, yolo_conv3_3_lrelu, yolo_conv3_4, yolo_conv3_4_lrelu, yolo_conv3_5, yolo_conv3_5_lrelu, yolo_conv4, yolo_conv3_6, yolo_conv4_lrelu, yolo_conv3_6_lrelu, conv_mid_object, Resize1, upsample1/transpose_1:0 copy, block_2b_relu/Relu:0 copy, yolo_conv5_1, yolo_conv5_1_lrelu, yolo_conv5_2, yolo_conv5_2_lrelu, yolo_conv5_3, yolo_conv5_3_lrelu, yolo_conv5_4, yolo_conv5_4_lrelu, yolo_conv5_5, yolo_conv5_5_lrelu, yolo_conv5_6, yolo_conv5_6_lrelu, conv_sm_object, (Unnamed Layer* 128) [Constant], bg_anchor/Identity:0_tile, bg_anchor/Reshape_reshape, Transpose + bg_reshape + bg_bbox_processor/Reshape_reshape, bg_bbox_processor/Reshape:0_cropping, bg_bbox_processor/Reshape:0_cropping_2, PWN(PWN(bg_bbox_processor/Sigmoid, (Unnamed Layer* 205) [Constant] + (Unnamed Layer* 206) [Shuffle] + bg_bbox_processor/mul), PWN((Unnamed Layer* 210) [Constant] + (Unnamed Layer* 211) [Shuffle], bg_bbox_processor/sub)), bg_bbox_processor/Reshape:0_cropping_1, PWN(PWN(PWN((Unnamed Layer* 201) [Constant] + (Unnamed Layer* 202) [Shuffle] + bg_bbox_processor/add, bg_bbox_processor/sub_1), bg_bbox_processor/Minimum_min), bg_bbox_processor/Exp), bg_bbox_processor/sub:0 copy, bg_bbox_processor/Exp:0 copy, bg_bbox_processor/Reshape:0_cropping0 copy, bg_bbox_processor/Reshape_1_reshape, bg_anchor/Reshape:0 copy, bg_bbox_processor/Reshape_1:0 copy, (Unnamed Layer* 251) [Constant], md_anchor/Identity:0_tile, md_anchor/Reshape_reshape, Transpose1 + md_reshape + md_bbox_processor/Reshape_reshape, md_bbox_processor/Reshape:0_cropping, md_bbox_processor/Reshape:0_cropping_2, PWN(PWN(md_bbox_processor/Sigmoid, (Unnamed Layer* 328) [Constant] + (Unnamed Layer* 329) [Shuffle] + md_bbox_processor/mul), PWN((Unnamed Layer* 333) [Constant] + (Unnamed Layer* 334) [Shuffle], md_bbox_processor/sub)), md_bbox_processor/Reshape:0_cropping_1, PWN(PWN(PWN((Unnamed Layer* 324) [Constant] + (Unnamed Layer* 325) [Shuffle] + md_bbox_processor/add, md_bbox_processor/sub_1), md_bbox_processor/Minimum_min), md_bbox_processor/Exp), md_bbox_processor/sub:0 copy, md_bbox_processor/Exp:0 copy, md_bbox_processor/Reshape:0_cropping0 copy, md_bbox_processor/Reshape_1_reshape, md_anchor/Reshape:0 copy, md_bbox_processor/Reshape_1:0 copy, (Unnamed Layer* 365) [Constant], sm_anchor/Identity:0_tile, 
sm_anchor/Reshape_reshape, Transpose2 + sm_reshape + sm_bbox_processor/Reshape_reshape, sm_bbox_processor/Reshape:0_cropping, sm_bbox_processor/Reshape:0_cropping_2, PWN(PWN(sm_bbox_processor/Sigmoid, (Unnamed Layer* 437) [Constant] + (Unnamed Layer* 438) [Shuffle] + sm_bbox_processor/mul), PWN((Unnamed Layer* 441) [Constant] + (Unnamed Layer* 442) [Shuffle], sm_bbox_processor/sub)), sm_bbox_processor/Reshape:0_cropping_1, PWN(PWN(PWN((Unnamed Layer* 434) [Constant] + (Unnamed Layer* 435) [Shuffle] + sm_bbox_processor/add, sm_bbox_processor/sub_1), sm_bbox_processor/Minimum_min), sm_bbox_processor/Exp), sm_bbox_processor/sub:0 copy, sm_bbox_processor/Exp:0 copy, sm_bbox_processor/Reshape:0_cropping0 copy, sm_bbox_processor/Reshape_1_reshape, sm_anchor/Reshape:0 copy, sm_bbox_processor/Reshape_1:0 copy, encoded_bg/concat:0 copy, encoded_md/concat:0 copy, encoded_sm/concat:0 copy, cls/Reshape_reshape, cls/Reshape:0_cropping, cls/Reshape:0_cropping_1, PWN(cls/Sigmoid_1, PWN(cls/Sigmoid, cls/mul)), box/Reshape_reshape, box/Reshape:0_cropping, box/Reshape:0_cropping_2, box/Reshape:0_cropping_3, PWN(box/mul_1, box/add_1), box/Reshape:0_cropping_4, box/Reshape:0_cropping_5, PWN(box/mul_3, (Unnamed Layer* 698) [Constant] + (Unnamed Layer* 699) [Shuffle] + box/mul_4), box/sub, box/Reshape:0_cropping_1, box/Reshape:0_cropping_6, box/Reshape:0_cropping_7, PWN(box/mul, box/add), box/Reshape:0_cropping_8, box/Reshape:0_cropping_9, PWN(box/mul_2, (Unnamed Layer* 702) [Constant] + (Unnamed Layer* 703) [Shuffle] + box/mul_6), box/sub_1, box/add_2, box/add_3, box/sub:0 copy, box/sub_1:0 copy, box/add_2:0 copy, box/add_3:0 copy, BatchedNMS_N, 
[INFO] Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
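
As a quick sanity check, the generated engine can be loaded back with trtexec (a minimal sketch, assuming the standard JetPack location of trtexec):

$ /usr/src/tensorrt/bin/trtexec --loadEngine=trt.fp16.engine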


Great~
