LPDNet post processing

• Hardware Platform: GPU
• TensorRT version: 8.6.1.6

LPDNet model

I am loading the LPDNet_usa_pruned_tao5.onnx file on Triton Inference Server and getting inference results from this model.

config file

name: "lpdnet"
platform: "onnxruntime_onnx"
max_batch_size: 1
input [
  {
    name: "input_1:0"
    data_type: TYPE_FP32
    format: FORMAT_NCHW
    dims: [ 3, 480, 640 ]
  }
]
output [
  {
    name: "output_cov/Sigmoid:0"
    data_type: TYPE_FP32
    dims: [ 1, 30, 40 ]
  }
]
output [
  {
    name: "output_bbox/BiasAdd:0"
    data_type: TYPE_FP32
    dims: [ 4, 30, 40 ]
  }
]
dynamic_batching { }

How do I do post-processing for this LPDNet model?
In the TAO Toolkit sample code for detectnet_v2, a clustering config is required. What clustering config should I use for post-processing in this case?
Is any Python code available for doing the post-processing?

You can refer to the tao-tf1 branch (tao_tensorflow1_backend/nvidia_tao_tf1/cv/detectnet_v2/scripts/inference.py at main · NVIDIA/tao_tensorflow1_backend · GitHub) or the tao-deploy branch (tao_deploy/nvidia_tao_deploy/cv/detectnet_v2/scripts/inference.py at main · NVIDIA/tao_deploy · GitHub).
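For reference, a detectnet_v2 inference spec typically carries the clustering parameters in a bbox_handler_config section. The snippet below is only an illustrative sketch: the class key "lpd", the color, and all numeric values are assumptions you would need to verify against your own training spec and the LPDNet model card.

```
bbox_handler_config {
  kitti_dump: true
  disable_overlay: false
  overlay_linewidth: 2
  classwise_bbox_handler_config {
    key: "lpd"
    value: {
      confidence_model: "aggregate_cov"
      output_map: "lpd"
      bbox_color { R: 0 G: 255 B: 0 }
      clustering_config {
        coverage_threshold: 0.005
        dbscan_eps: 0.3
        dbscan_min_samples: 0.05
        minimum_bounding_box_height: 4
      }
    }
  }
}
```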

Thank you.

Is the detectnet_v2 post-processor in the TAO Toolkit sample code applicable to the LPDNet model, or is it only for the PeopleNet model? I tried to run the above post-processing code (detectnet_processor), but no clustering config was available in this case.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Actually, both LPDNet and PeopleNet are based on the DetectNet_v2 network, so the same post-processing applies to both.
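As a rough illustration of that post-processing, the sketch below decodes the two output tensors from the config above (coverage 1x30x40, bbox 4x30x40) into pixel-space boxes and then merges overlaps. The stride of 16 (640/40 = 480/30), the bbox normalization of 35.0, and the 0.5 cell offset are assumed DetectNet_v2 defaults, so verify them against your spec; greedy NMS is used here as a simple stand-in for TAO's DBSCAN clustering.

```python
import numpy as np

STRIDE = 16        # assumed: 640/40 = 480/30 pixels per grid cell
BBOX_NORM = 35.0   # assumed bbox normalization scale (DetectNet_v2 default)
OFFSET = 0.5       # assumed grid-cell center offset

def decode(cov, bbox, cov_threshold=0.005):
    """cov: (1, 30, 40) sigmoid coverage; bbox: (4, 30, 40) l,t,r,b offsets.
    Returns (N, 4) boxes [x1, y1, x2, y2] in pixels and (N,) scores."""
    gy, gx = np.mgrid[0:cov.shape[1], 0:cov.shape[2]]
    cx = (gx + OFFSET) * STRIDE          # grid-cell centers in pixels
    cy = (gy + OFFSET) * STRIDE
    x1 = cx - bbox[0] * BBOX_NORM        # offsets from the cell center
    y1 = cy - bbox[1] * BBOX_NORM
    x2 = cx + bbox[2] * BBOX_NORM
    y2 = cy + bbox[3] * BBOX_NORM
    keep = cov[0] > cov_threshold        # drop low-coverage cells
    boxes = np.stack([x1[keep], y1[keep], x2[keep], y2[keep]], axis=1)
    return boxes, cov[0][keep]

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy NMS, a simple substitute for TAO's DBSCAN clustering."""
    area = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        iou = inter / (area[i] + area[rest] - inter)
        order = rest[iou <= iou_threshold]
    return keep
```

You would feed the two Triton response tensors (for one image of the batch) directly into `decode`, then pass the surviving boxes through `nms`.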

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.