Use the following command to run the client:
python tao_client.py --verbose /home/tao_triton/tao-toolkit-triton-apps/images/img.jpg --mode LPRNet -m lprnet_tao -b 1 -i http -u localhost:8000 --output_path /home/tao_toolkit/tao-toolkit-triton-apps-main/tao_triton/tao_output
Thank you … I was getting wrong results with some random images.
I am using a model that I converted with tao-converter, and I am getting correct results now. Is there any sample client code that can be used for converting and running the LPDNet model?
Thank you so much. I am not able to convert the LPRNet model using tao-converter. Could you suggest any changes I should make in this… For the detectnet_v2 network, is this the same model that should be used?
Yes, LPDNet_usa_pruned_tao5.onnx is based on the detectnet_v2 network. For the ONNX file, you should use trtexec to convert it.
Please refer to TRTEXEC with DetectNet-v2 - NVIDIA Docs.
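As a concrete reference, a typical trtexec invocation for an ONNX model looks like the sketch below. The file and engine paths are placeholders, and whether FP16 is appropriate depends on your deployment; always verify the model's actual input tensor name and shape (e.g. with Netron) before fixing shapes on the command line.

```shell
# Convert the LPDNet ONNX model to a TensorRT engine (paths are placeholders).
trtexec --onnx=LPDNet_usa_pruned_tao5.onnx \
        --saveEngine=lpdnet.engine \
        --fp16
```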
Thank you.
I loaded and ran the LPDNet_usa_pruned_tao5.onnx file itself on the Triton Inference Server, and it runs correctly.
This is the config.pbtxt file:
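The actual config.pbtxt attached in the thread is not reproduced here. For reference, a minimal Triton config for serving an ONNX DetectNet-v2-style model might look like the sketch below; the model name, tensor names, dims, and batch size are illustrative assumptions and must be checked against the real model.

```
name: "lpdnet_tao"
platform: "onnxruntime_onnx"
max_batch_size: 16
input [
  {
    name: "input_1"        # placeholder; check the real input tensor name
    data_type: TYPE_FP32
    dims: [ 3, 480, 640 ]  # illustrative C,H,W; verify against the model
  }
]
output [
  {
    name: "output_cov"     # coverage tensor (name is an assumption)
    data_type: TYPE_FP32
    dims: [ 1, 30, 40 ]
  },
  {
    name: "output_bbox"    # bbox tensor (name is an assumption)
    data_type: TYPE_FP32
    dims: [ 4, 30, 40 ]
  }
]
```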
How do I do post-processing on the inference results of this LPDNet model? Is the DetectNet processor in the TAO client code sufficient for LPDNet post-processing as well? Do I need a separate clustering config in this case?
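For context on what this post-processing involves: detectnet_v2-style models emit a per-cell coverage (confidence) grid plus a per-cell bbox-offset tensor, and post-processing thresholds the coverage and then clusters or suppresses overlapping candidates. The NumPy sketch below uses greedy NMS in place of the DBSCAN clustering that the TAO client's clustering config actually drives; the stride, tensor layout, and thresholds here are illustrative assumptions, not LPDNet's real values.

```python
import numpy as np

def decode_detectnet_v2(coverage, bboxes, stride=16, threshold=0.4):
    """Decode raw detectnet_v2-style grid outputs into candidate boxes.

    coverage: (H, W) confidence per grid cell (single class assumed)
    bboxes:   (4, H, W) offsets (left, top, right, bottom) from each cell centre
    """
    H, W = coverage.shape
    candidates = []
    for cy in range(H):
        for cx in range(W):
            score = coverage[cy, cx]
            if score < threshold:
                continue
            # Grid-cell centre in input-image pixel coordinates
            px = cx * stride + stride / 2
            py = cy * stride + stride / 2
            candidates.append((px - bboxes[0, cy, cx],
                               py - bboxes[1, cy, cx],
                               px + bboxes[2, cy, cx],
                               py + bboxes[3, cy, cx],
                               float(score)))
    return candidates

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2, score) tuples."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, iou_thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring
    box and drop any later box that overlaps it too much."""
    boxes = sorted(boxes, key=lambda b: b[4], reverse=True)
    kept = []
    for b in boxes:
        if all(iou(b, k) < iou_thresh for k in kept):
            kept.append(b)
    return kept
```

The real TAO processor replaces the NMS step with DBSCAN-style clustering controlled by a clustering config, but the overall shape of the pipeline (threshold, decode, merge overlaps) is the same.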
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.