Cannot generate LPDNet TensorRT engine

Hello. When I tried to generate the LPDNet TensorRT engine, I got the error message shown below ("Model has dynamic shape but no optimization profile specified").

Is there something wrong with my convert command?

sudo docker run -it --rm -v /home/ubuntu/tao_test_2023/lpdnet/:/workspace/tao-experiments/lpdnet/ \
    nvcr.io/nvidia/tao/tao-toolkit:4.0.1-tf1.15.5 \
    converter \
        /workspace/tao-experiments/lpdnet/yolov4_tiny_usa_deployable.etlt \
        -k nvidia_tlt \
        -o output_cov/Sigmoid,output_bbox/BiasAdd \
        -d 3,480,640 \
        -i nchw \
        -m 64 \
        -t fp16 \
        -e /workspace/tao-experiments/lpdnet/lpdnet.trt \
        -b 32
==============================
=== TAO Toolkit TensorFlow ===
==============================

NVIDIA Release 4.0.1-TensorFlow (build )
TAO Toolkit Version 4.0.1

Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES.  All rights reserved.

This container image and its contents are governed by the TAO Toolkit End User License Agreement.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/tao-toolkit-software-license-agreement

NOTE: Mellanox network driver detected, but NVIDIA peer memory driver not
      detected.  Multi-node communication performance may be reduced.

[INFO] [MemUsageChange] Init CUDA: CPU +208, GPU +0, now: CPU 220, GPU 378 (MiB)
[INFO] [MemUsageChange] Init builder kernel library: CPU +122, GPU +22, now: CPU 396, GPU 400 (MiB)
[WARNING] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
[INFO] ----------------------------------------------------------------
[INFO] Input filename:   /tmp/fileq7Wqif
[INFO] ONNX IR version:  0.0.7
[INFO] Opset version:    12
[INFO] Producer name:    
[INFO] Producer version: 
[INFO] Domain:           
[INFO] Model version:    0
[INFO] Doc string:       
[INFO] ----------------------------------------------------------------
[WARNING] parsers/onnx/onnx2trt_utils.cpp:375: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[WARNING] parsers/onnx/onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
[WARNING] parsers/onnx/onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
[WARNING] parsers/onnx/onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
[WARNING] parsers/onnx/onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
[WARNING] parsers/onnx/onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
[WARNING] parsers/onnx/onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
[WARNING] parsers/onnx/onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
[WARNING] parsers/onnx/onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
[WARNING] parsers/onnx/onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
[WARNING] parsers/onnx/onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
[WARNING] parsers/onnx/onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
[WARNING] parsers/onnx/onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
[WARNING] parsers/onnx/onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
[WARNING] parsers/onnx/onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
[WARNING] parsers/onnx/onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
[WARNING] parsers/onnx/onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
[WARNING] parsers/onnx/onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
[WARNING] parsers/onnx/onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
[WARNING] parsers/onnx/onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
[WARNING] parsers/onnx/onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
[WARNING] parsers/onnx/onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
[INFO] No importer registered for op: BatchedNMSDynamic_TRT. Attempting to import as plugin.
[INFO] Searching for plugin: BatchedNMSDynamic_TRT, plugin_version: 1, plugin_namespace: 
[WARNING] parsers/onnx/builtin_op_importers.cpp:5225: Attribute caffeSemantics not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[INFO] Successfully created plugin: BatchedNMSDynamic_TRT
[INFO] Detected input dimensions from the model: (-1, 3, 480, 640)
[ERROR] Model has dynamic shape but no optimization profile specified.

Thank you for your help in advance.

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

For LPDNet, there are two versions. One is trained on the detectnet_v2 network; the other is trained on the yolov4_tiny network.
You are using the yolov4_tiny version.
So, please use something like the following.

converter -k $KEY \
          -d 3,384,1248 \
          -o BatchedNMS \
          -e /export/trt.fp16.engine \
          -t fp16 \
          -i nchw \
          -m 8 \
          yolov4_tiny_usa_deployable.etlt
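
If the converter still reports "Model has dynamic shape but no optimization profile specified", you can additionally pass an optimization profile with the -p option, which takes the input tensor name followed by min, opt, and max shapes. The command below is only a sketch: it assumes the model's input tensor is named Input (not confirmed from your log) and keeps the 480x640 resolution the parser reported; adjust the batch sizes in the profile to match your deployment.

converter -k nvidia_tlt \
          -d 3,480,640 \
          -o BatchedNMS \
          -p Input,1x3x480x640,8x3x480x640,16x3x480x640 \
          -e /workspace/tao-experiments/lpdnet/lpdnet.trt \
          -t fp16 \
          -i nchw \
          -m 16 \
          /workspace/tao-experiments/lpdnet/yolov4_tiny_usa_deployable.etlt

The three shapes in -p set the minimum, optimum, and maximum batch dimensions TensorRT will build the engine for, which is what the dynamic-shape error is asking for.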

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.