TAO LPRNet inference

Please provide the following information when requesting support.

• Hardware (T4/V100/Xavier/Nano/etc) RTX4060
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc) LPRnet
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here) 5.0
• Training spec file(If have, please share here)
lpr_spec.txt (1.1 KB)
• How to reproduce the issue ? (This is for errors. Please share the command line and the detailed log here.)

Hi, I have trained a TAO model and the output is an ONNX model. I am trying to run inference using the TAO Toolkit with the command below:

tao model lprnet inference -m /workspace/model.onnx -i /workspace/image.jpg -e /workspace/lpr_spec.txt -k nvidia_tlt

However, it produces the error below:

INFO: Starting LPRNet Inference.
INFO: Merging specification from /workspace/tao_ws/lpr_spec.txt
Unsupported model type: .onnx
INFO: Inference was interrupted
Execution status: PASS

Any advice? May I know why it reports an unsupported model type? Thanks.

Please refer to the notebook tao_tutorials/notebooks/tao_launcher_starter_kit/lprnet/lprnet.ipynb at main · NVIDIA/tao_tutorials · GitHub, or to the user guide.
The tao model lprnet inference command runs against an .hdf5 file, not an .onnx file.

Can you guide me on how to convert the ONNX model to the HDF5 format?

The .hdf5 files are generated during training. Please look for them in your results folder.

I'm sorry, but is there any way I could convert ONNX to HDF5? I think I deleted the .hdf5 files earlier and cannot recover them.

No, an ONNX model cannot be converted back to HDF5.
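If retraining is not an option, the exported ONNX model can still be run outside TAO (for example with ONNX Runtime) as long as you decode the CTC output yourself. Below is a minimal sketch of greedy CTC decoding; the character set and the blank-index convention are assumptions for illustration — the real list comes from the characters_list_file referenced in your training spec, not from TAO documentation.

```python
import numpy as np

# Hypothetical character set; the real one comes from the
# characters_list_file used at training time.
CHARS = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
BLANK = len(CHARS)  # assume the CTC blank is the last class index

def ctc_greedy_decode(logits):
    """Greedy CTC decode: argmax per timestep, collapse repeats, drop blanks.

    logits: array of shape (timesteps, num_classes).
    """
    ids = logits.argmax(axis=-1)
    out = []
    prev = BLANK
    for i in ids:
        if i != prev and i != BLANK:
            out.append(CHARS[i])
        prev = i
    return "".join(out)

# Dummy logits standing in for the network output: five timesteps
# whose argmax sequence is A, A, blank, B, blank -> decodes to "AB".
T, C = 5, len(CHARS) + 1
logits = np.zeros((T, C))
for t, i in zip(range(T), [10, 10, BLANK, 11, BLANK]):
    logits[t, i] = 1.0
print(ctc_greedy_decode(logits))  # -> AB
```

In a real pipeline, the logits array would come from an ONNX Runtime session fed with a preprocessed plate crop; only the decoding step is shown here.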

What about BYOM (Bring Your Own Model)?

Is this useful here?

I suggest you train again to regenerate the .hdf5 files.

Can you search your results folder for any .hdf5 or .tlt files?

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.