Incorrect results using LPRNet model

• Hardware Platform: GPU
• TensorRT version: 8.6.1.6
• Network: LPRNet

I am getting incorrect results for number plate recognition using the LPRNet model.

The actual number plate is A22345, but the result I get is 66D968.

Inference output:
tf_op_layer_ArgMax = [[35 35 35 35 35 6 35 6 35 35 13 35 35 35 35 9 35 6 35 35 35 35 35 8]]
tf_op_layer_Max = [[1. 1. 1. 1. 1. 0.36572266
0.99121094 0.41015625 0.9291992 0.9921875 0.29785156 0.99902344
0.88427734 0.56640625 1. 0.29760742 0.44873047 0.7988281
0.9941406 1. 1. 1. 0.99902344 0.80322266]]
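
For reference, the raw outputs above decode to the reported string. The following is a minimal greedy-CTC sketch; the character list is an assumption (35-character US dictionary: digits 0-9 followed by a-z without 'o', index 35 as blank) and should be checked against the characters file shipped with the model. Because the decode reproduces 66D968 exactly, the wrong characters appear to come from the model output itself (for example a preprocessing or engine mismatch) rather than from the decoding step.

import numpy as np

# Assumed 35-character US LPR dictionary; verify against the model's
# characters list file (index 35 is the CTC blank).
CHARS = list("0123456789abcdefghijklmnpqrstuvwxyz")
BLANK = len(CHARS)  # 35

def greedy_ctc_decode(argmax_row):
    """Collapse repeated indices, drop blanks, map indices to characters."""
    out, prev = [], None
    for idx in argmax_row:
        if idx != BLANK and idx != prev:
            out.append(CHARS[idx])
        prev = idx
    return "".join(out).upper()

argmax = np.array([35, 35, 35, 35, 35, 6, 35, 6, 35, 35, 13, 35,
                   35, 35, 35, 9, 35, 6, 35, 35, 35, 35, 35, 8])
print(greedy_ctc_decode(argmax))  # -> 66D968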

Please share the detailed steps, commands, etc.

I am running the lprnet_tao model in the NVIDIA Triton Inference Server container nvcr.io/nvidia/tritonserver:23.12-py3.

I am using the following command to run the client:
python tao_client.py --verbose /home/tao_triton/tao-toolkit-triton-apps/images/img.jpg --mode LPRNet -m lprnet_tao -b 1 -i http -u localhost:8000 --output_path /home/tao_toolkit/tao-toolkit-triton-apps-main/tao_triton/tao_output

Did you ever run the default LPRNet model with the steps mentioned in GitHub - NVIDIA-AI-IOT/tao-toolkit-triton-apps (Sample app code for deploying TAO Toolkit trained models to Triton)? Is it correct?

Thank you. I was getting wrong results with some random images.
I am now using a model that I converted with tao-converter, and I am getting correct results. Is there any sample client code that can be used for converting and running the LPDNet model?

Thanks for the info.

LPDNet has two versions. One is trained with the detectnet_v2 network; the other is trained with the YOLOv4_tiny network. For the detectnet_v2 network, you can refer to tao-toolkit-triton-apps/tao_triton/python/model/detectnet_model.py at main · NVIDIA-AI-IOT/tao-toolkit-triton-apps · GitHub.

Thank you so much. I am not able to convert the LPDNet model using tao-converter. Could you suggest any changes I should make to this command? For using the detectnet_v2 network, is this the same model that should be used?

./tao-converter /home/lpdnet/LPDNet_usa_pruned_tao5.onnx \
    -k nvidia_tlt \
    -p image_input,1x3x480x640,4x3x480x640,16x3x480x640 \
    -t fp16 \
    -e /home/tao_converted_models/lpdnet.plan

Yes, this LPDNet_usa_pruned_tao5.onnx is based on the detectnet_v2 network. For the ONNX file, you should use trtexec to convert it.
Please refer to TRTEXEC with DetectNet-v2 - NVIDIA Docs.
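
A trtexec invocation might look like the sketch below. This is only an illustration: the batch range mirrors the tao-converter command above, the input tensor name input_1:0 is taken from the config.pbtxt later in this thread and should be verified (for example with Netron), tensor names containing a colon may need escaping per the trtexec documentation, and the linked page lists the exact flags recommended for DetectNet_v2.

trtexec --onnx=/home/lpdnet/LPDNet_usa_pruned_tao5.onnx \
        --minShapes=input_1:0:1x3x480x640 \
        --optShapes=input_1:0:4x3x480x640 \
        --maxShapes=input_1:0:16x3x480x640 \
        --fp16 \
        --saveEngine=/home/tao_converted_models/lpdnet.plan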

Thank you.
I loaded and ran the LPDNet_usa_pruned_tao5.onnx file itself on the Triton Inference Server, and it runs correctly.
This is the config.pbtxt file:

name: "lpdnet"
platform: "onnxruntime_onnx"
max_batch_size: 1
input [
  {
    name: "input_1:0"
    data_type: TYPE_FP32
    dims: [ 3, 480, 640 ]
  }
]
output [
  {
    name: "output_bbox/BiasAdd:0"
    data_type: TYPE_FP32
    dims: [ 4, 30, 40 ]
  },
  {
    name: "output_cov/Sigmoid:0"
    data_type: TYPE_FP32
    dims: [ 1, 30, 40 ]
  }
]
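
For reference, this config assumes the usual Triton model repository layout (names below are illustrative). By default the ONNX Runtime backend looks for a file named model.onnx; a different filename can be set with default_model_filename.

model_repository/
  lpdnet/
    config.pbtxt
    1/
      model.onnx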

How do I do post-processing on the inference results of this LPDNet model? Is the DetectNet processor in the TAO client code sufficient for LPDNet post-processing as well? Do I need a separate clustering config in this case?

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Please refer to tao-toolkit-triton-apps/tao_triton/python/postprocessing/detectnet_processor.py at main · NVIDIA-AI-IOT/tao-toolkit-triton-apps · GitHub.
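
Roughly, the decoding for a DetectNet_v2 head looks like the sketch below. The stride (16 for a 640x480 input, giving the 40x30 grid), the bbox normalization of 35.0 and the coverage threshold of 0.4 are assumed defaults for DetectNet_v2 models, not values confirmed in this thread; the linked detectnet_processor.py and its clustering/postprocessing spec are the authoritative implementation, and the candidate boxes it produces still need clustering, which is what that spec configures.

import numpy as np

# Sketch of DetectNet_v2-style bbox decoding for the two LPDNet outputs above.
# stride, bbox_norm and cov_threshold are assumed defaults; check the model
# card and the clustering config used by the tao_triton client.
def decode_lpdnet(cov, bbox, stride=16, bbox_norm=35.0, cov_threshold=0.4):
    """cov: (1, 30, 40) sigmoid coverage, bbox: (4, 30, 40) offsets."""
    grid_h, grid_w = cov.shape[1:]
    boxes, scores = [], []
    for gy in range(grid_h):
        for gx in range(grid_w):
            score = float(cov[0, gy, gx])
            if score < cov_threshold:
                continue
            # Grid-cell center expressed in bbox_norm units.
            cx = (gx * stride + 0.5) / bbox_norm
            cy = (gy * stride + 0.5) / bbox_norm
            # Offsets -> absolute pixel coordinates in the 640x480 input.
            x1 = (bbox[0, gy, gx] - cx) * -bbox_norm
            y1 = (bbox[1, gy, gx] - cy) * -bbox_norm
            x2 = (bbox[2, gy, gx] + cx) * bbox_norm
            y2 = (bbox[3, gy, gx] + cy) * bbox_norm
            boxes.append([x1, y1, x2, y2])
            scores.append(score)
    # Candidate boxes still need clustering (DBSCAN/NMS) before use.
    return np.array(boxes), np.array(scores)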
