Running Predictions using Detectnet_v2

Please provide the following information when requesting support.

• Hardware (T4/V100/Xavier/Nano/etc)
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc)
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here)
• Training spec file (if you have one, please share it here)
• How to reproduce the issue? (This is for errors. Please share the command line and the detailed log here.)

Hi, I have seen how to train the LPD model in the NVIDIA AI IOT GitHub repository. I have trained my LPD model and would like to use it in my Python code to generate predictions with the ONNX model. Are there any samples I can follow? I need to feed in an image and have the model detect the location of the license plate. Thanks.

You can refer to Inference on LPDNet onnx file. Note that there is no official sample for running inference with onnxruntime.
The tao-deploy branch and triton-apps run inference against a TensorRT engine instead; you can still leverage them along with the tao-tf1 branch.
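Since there is no official onnxruntime sample, here is a minimal, unofficial sketch of the post-processing side. It decodes DetectNet_v2-style raw outputs (a coverage map and a bbox-offset map) into pixel-space boxes. The constants used (stride 16, grid offset 0.5, bbox_norm 35.0) and the way cov/bbox are obtained are assumptions based on the DeepStream sample parser for DetectNet_v2; verify them against your exported LPDNet model.

```python
# Hedged sketch: decode DetectNet_v2-style outputs into boxes.
# Assumptions (check against your export): stride 16, grid offset 0.5,
# bbox_norm 35.0, cov shape (C, H, W), bbox shape (C*4, H, W).
import numpy as np

def decode_detectnet_v2(cov, bbox, stride=16, offset=0.5,
                        bbox_norm=35.0, cov_threshold=0.4):
    """Return a list of (class_id, score, x1, y1, x2, y2) in input pixels."""
    num_classes, grid_h, grid_w = cov.shape
    # Grid-cell centers in pixels, normalized by bbox_norm as in the parser.
    cx = (np.arange(grid_w) * stride + offset) / bbox_norm
    cy = (np.arange(grid_h) * stride + offset) / bbox_norm
    boxes = []
    for c in range(num_classes):
        ys, xs = np.where(cov[c] > cov_threshold)
        for y, x in zip(ys, xs):
            o = bbox[c * 4:(c + 1) * 4, y, x]
            x1 = (o[0] - cx[x]) * -bbox_norm
            y1 = (o[1] - cy[y]) * -bbox_norm
            x2 = (o[2] + cx[x]) * bbox_norm
            y2 = (o[3] + cy[y]) * bbox_norm
            boxes.append((c, float(cov[c, y, x]), x1, y1, x2, y2))
    return boxes

# With onnxruntime (not imported here), cov/bbox would come from something like:
#   sess = onnxruntime.InferenceSession("lpdnet.onnx")   # hypothetical path
#   cov, bbox = sess.run(None, {sess.get_inputs()[0].name: preprocessed})
# Below, synthetic arrays stand in for real model outputs.
cov = np.zeros((1, 2, 2), dtype=np.float32)
cov[0, 1, 1] = 0.9            # one confident grid cell
bbox = np.zeros((4, 2, 2), dtype=np.float32)  # zero offsets -> box at cell center
boxes = decode_detectnet_v2(cov, bbox)
print(boxes)
```

With zero offsets the decoded box collapses onto the grid-cell center, which is a quick sanity check that the center/stride arithmetic matches your model before wiring in real onnxruntime outputs.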

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.