Unable to obtain LPDNet model information

I have downloaded the LPDNet pre-trained model from NGC and converted it from the TAO format to a TensorRT engine. I plan to deploy it to Triton Inference Server, but I am unable to find the model's input and output information. Do you have any information on this model?

Model address:
wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/tao/lpdnet/versions/pruned_v2.1/zip -O lpdnet_pruned_v2.1.zip

Thank you for your reply

You can find the info in LPDNet | NVIDIA NGC

For the LPDNet model that is trained on the detectnet_v2 network, it is:

uff-input-blob-name=input_1
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd

You can also use "polygraphy inspect model xxx.engine" to check the input/output tensors.


Thank you very much for your reply

I am using the YOLOv4-tiny based model yolov4_tiny_ccpd_deployable.etlt, and the following error occurred when inspecting the model's input and output with polygraphy. Can you explain the input and output of this model?
Error content:
root@nvidia-B360M-D2V:/deepstream/TensorRT-8.2.5.1/bin# polygraphy inspect model /deepstream/tao-toolkit-triton-apps-main/model_repository/deepstream_lpr_app/models/LP/LPD/trafficc1.plan
[W] 'colored' module is not installed, will not use colors when logging. To enable colors, please install the 'colored' module: python3 -m pip install colored
[I] Loading bytes from /deepstream/tao-toolkit-triton-apps-main/model_repository/deepstream_lpr_app/models/LP/LPD/trafficc1.plan
[E] Assertion failed: d == a + length
batchedNMSPlugin/batchedNMSPlugin.cpp:115
Aborting...
Aborted (core dumped)

The input and output of the model:
name: "ch_lpd_yolov4-tiny"
platform: "tensorrt_plan"
max_batch_size: 4
default_model_filename: "yolov4_tiny_ccpd_deployable.etlt_b4_gpu0_int8.engine"
input [
{
name: "Input"
data_type: TYPE_FP32
format: FORMAT_NCHW
dims: [ 3, 1184, 736 ]
}
]

output [
{
name: "BatchedNMS"
data_type: TYPE_INT32
dims: [1]
},
{
name: "BatchedNMS_1"
data_type: TYPE_FP32
dims: [200, 4]
},
{
name: "BatchedNMS_2"
data_type: TYPE_FP32
dims: [200]
},
{
name: "BatchedNMS_3"
data_type: TYPE_FP32
dims: [200]
}
]

instance_group [
{
kind: KIND_GPU
count: 1
gpus: 0
}
]
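
As a sanity check on a config like the one above, you can work out the buffer sizes a Triton client would need from the dims, data types, and max_batch_size. This is a minimal sketch in plain Python, assuming the shapes and dtypes shown in my config (FP32 is 4 bytes, INT32 is 4 bytes); the function name is illustrative, not a Triton API:

```python
# Sketch: buffer sizes implied by the config above (assumed shapes/dtypes).

def tensor_bytes(dims, dtype_size, batch):
    """Total bytes for a tensor of shape [batch, *dims] with the given element size."""
    n = batch
    for d in dims:
        n *= d
    return n * dtype_size

MAX_BATCH = 4
FP32, INT32 = 4, 4  # bytes per element

# Input "Input": FP32, NCHW, dims [3, 1184, 736]
input_bytes = tensor_bytes([3, 1184, 736], FP32, MAX_BATCH)

# The four batchedNMS outputs (keepTopK = 200 in this config)
out_bytes = {
    "BatchedNMS":   tensor_bytes([1], INT32, MAX_BATCH),
    "BatchedNMS_1": tensor_bytes([200, 4], FP32, MAX_BATCH),
    "BatchedNMS_2": tensor_bytes([200], FP32, MAX_BATCH),
    "BatchedNMS_3": tensor_bytes([200], FP32, MAX_BATCH),
}

print(input_bytes)  # 4 * 3 * 1184 * 736 * 4 bytes
for name, b in out_bytes.items():
    print(name, b)
```

The input buffer dominates here: at max batch 4 the FP32 image tensor alone is roughly 40 MB, while all four NMS outputs together are a few KB.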

For the YOLOv4-tiny version, you can also find the info in the model card.

uff-input-blob-name=Input
output-blob-names=BatchedNMS

For [E] Assertion failed: d == a + length, please refer to Transfer Learning Toolkit v3.0 trtexec loading - #3

The download address for the yolov4_tiny_ccpd_deployable.etlt model is:

I found that the model has the four output parameters listed above: "BatchedNMS", "BatchedNMS_1", "BatchedNMS_2", and "BatchedNMS_3".

Can you provide a detailed explanation of the four output parameters of the model used here? Or is there an official parameter description document?

Refer to https://github.com/NVIDIA/TensorRT/tree/main/plugin/batchedNMSPlugin

The boxes and scores inputs generate the following four outputs:

* `num_detections` The `num_detections` output is of shape `[batch_size]`. It is an int32 tensor indicating the number of valid detections per batch item. It can be less than `keepTopK`. Only the top `num_detections[i]` entries in `nmsed_boxes[i]`, `nmsed_scores[i]` and `nmsed_classes[i]` are valid.
* `nmsed_boxes` A `[batch_size, keepTopK, 4]` float32 tensor containing the coordinates of non-max suppressed boxes.
* `nmsed_scores` A `[batch_size, keepTopK]` float32 tensor containing the scores for the boxes.
* `nmsed_classes` A `[batch_size, keepTopK]` float32 tensor containing the classes for the boxes.
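
Concretely, a client only reads the first `num_detections[i]` slots of each per-batch output; the remaining `keepTopK` slots are padding. Here is a minimal sketch of that slicing, assuming batch_size = 1 and keepTopK = 200 as in the config above; all the sample values are made up:

```python
# Sketch: keep only the valid detections from the four batchedNMS outputs.
# Only the first num_detections[i] entries per batch item are valid.

keep_top_k = 200

# Made-up outputs for batch_size = 1, shaped as in the plugin docs:
num_detections = [2]                                    # [batch_size], int32
nmsed_boxes = [[[0.1, 0.2, 0.3, 0.4],
                [0.5, 0.5, 0.7, 0.8]] + [[0.0] * 4] * (keep_top_k - 2)]
nmsed_scores = [[0.9, 0.6] + [0.0] * (keep_top_k - 2)]  # [batch_size, keepTopK]
nmsed_classes = [[0.0, 1.0] + [0.0] * (keep_top_k - 2)] # [batch_size, keepTopK]

def valid_detections(batch_idx):
    """Return (box, score, class) triples for the valid slots only."""
    n = num_detections[batch_idx]
    return list(zip(nmsed_boxes[batch_idx][:n],
                    nmsed_scores[batch_idx][:n],
                    nmsed_classes[batch_idx][:n]))

for box, score, cls in valid_detections(0):
    print(box, score, cls)
```

For LPDNet there is a single class (the license plate), so `nmsed_classes` will normally be all zeros; the box coordinate convention (normalized vs. pixel) depends on how the NMS plugin was configured at export time.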

Thanks a lot

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.