Postprocessing for LPRNet output

Please provide the following information when requesting support.

• Hardware (T4/V100/Xavier/Nano/etc)
RTX4060
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc)
LPRnet
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here)
• Training spec file(If have, please share here)
• How to reproduce the issue ? (This is for errors. Please share the command line and the detailed log here.)

[array([[36, 36, 36, 36, 36, 36, 36, 36, 36, 36, 36, 36, 36, 36, 36, 36,
36, 36, 36, 36, 36, 36, 36, 36]], dtype=int32), array([[0.30274895, 0.86494905, 0.9930987 , 0.9981476 , 0.9954442 ,
0.9991584 , 0.9989754 , 0.99612707, 0.9935313 , 0.994934  ,
0.98253965, 0.99814665, 0.9967269 , 0.9971048 , 0.99921846,
0.99689305, 0.9958799 , 0.9927529 , 0.98989385, 0.99216217,
0.99449193, 0.9957177 , 0.9970091 , 0.99679095]], dtype=float32)]

I used TensorRT to load the engine model and run prediction on a license plate image; the prediction result is shown above.
This is the image I used for prediction:
BD14_46.9849

The documentation in the NGC catalog states that postprocessing is needed to get the final license plate result. Is there any postprocessing advice I should follow to get the final result?

You can refer to the tao_deploy repository. For example, take a look at nvidia_tao_deploy/cv/lprnet/utils.py and nvidia_tao_deploy/cv/lprnet/scripts/inference.py in NVIDIA/tao_deploy on GitHub.
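For context, the engine returns two arrays per image: the predicted character index at each of the 24 time steps (where an index equal to the size of the character set, 36 here, appears to be the CTC blank), and the corresponding confidence for each step. A minimal greedy CTC decode along the lines of what the referenced utils.py does could look like the sketch below. This is only a sketch: CLASSES is a placeholder character list, and you should replace it with the characters_list_file your model was actually trained with so that the indices map to the right characters.

```python
import numpy as np

# Hypothetical character list -- replace with the characters_list_file
# (e.g. us_lp_characters.txt) your LPRNet model was trained with.
CLASSES = list("0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ")
BLANK_ID = len(CLASSES)  # index emitted for "no character" (36 in your output)


def decode_ctc_greedy(pred_ids, pred_conf, classes, blank_id):
    """Greedy CTC decode: collapse repeated indices, then drop blanks."""
    plates, confidences = [], []
    for seq, seq_conf in zip(pred_ids, pred_conf):
        # 1) collapse consecutive duplicates, keeping the first confidence of each run
        collapsed = [(seq[0], seq_conf[0])]
        for t in range(1, len(seq)):
            if seq[t] != collapsed[-1][0]:
                collapsed.append((seq[t], seq_conf[t]))
        # 2) remove the blank symbol and map the remaining indices to characters
        plates.append("".join(classes[i] for i, _ in collapsed if i != blank_id))
        confidences.append([c for i, c in collapsed if i != blank_id])
    return plates, confidences


# pred_ids / pred_conf correspond to the two arrays returned by the engine above
pred_ids = np.array([[36] * 24], dtype=np.int32)       # output[0]: character indices
pred_conf = np.full((1, 24), 0.99, dtype=np.float32)   # output[1]: confidences
print(decode_ctc_greedy(pred_ids, pred_conf, CLASSES, BLANK_ID))  # -> ([''], [[]])
```

Note that in the output you posted every index equals 36, so a decode like this would return an empty string; that would suggest checking the input preprocessing (image size, channel order, scaling) rather than the decode step itself.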

There has been no update from you for a while, so we assume this is no longer an issue. Hence we are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.