Deepstream Python LPR errors

• Hardware Platform (Jetson / GPU) Jetson
• DeepStream Version 6.0
• JetPack Version (valid for Jetson only) 4.6.2-b5
• TensorRT Version 8.2.1-1+cuda10.2
• Issue Type( questions, new requirements, bugs) Question


I tried to run the Python version of the DeepStream LPR (license plate recognition) sample app from here:

But I get the following error:
gstname= video/x-raw
features= <Gst.CapsFeatures object at 0x7f72470768 (GstCapsFeatures at 0x7eb4040220)>
0:00:10.803704206 2279 0x2f35ed40 WARN nvinfer gstnvinfer.cpp:2288:gst_nvinfer_output_loop: error: Internal data stream error.
0:00:10.803937595 2279 0x2f35ed40 WARN nvinfer gstnvinfer.cpp:2288:gst_nvinfer_output_loop: error: streaming stopped, reason error (-5)
Error: gst-stream-error-quark: Internal data stream error. (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(2288): gst_nvinfer_output_loop (): /GstPipeline:pipeline0/GstNvInfer:secondary2-nvinference-engine:
streaming stopped, reason error (-5)
Exiting app

Any idea why this is not working? Other Python apps are running without issues.
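In case it helps others debug the same symptom: the generic "Internal data stream error" from gst-nvinfer usually hides the real cause, and raising the GStreamer debug level surfaces it. A sketch of the environment setup (the launch command for the sample is assumed; substitute your own):

```shell
# Raise GStreamer verbosity before launching the sample app.
# 3 = WARNING/FIXME globally; nvinfer:5 adds DEBUG output from the
# inference plugin (the category shown in the log lines above).
export GST_DEBUG=3,nvinfer:5
export GST_DEBUG_FILE=/tmp/gst_lpr.log   # write logs to a file instead of stderr

# Then run the LPR sample as usual, e.g.:
#   python3 deepstream_lpr_app.py <args...>   (script name is an assumption)
echo "GST_DEBUG=$GST_DEBUG"
```

With nvinfer at debug level, a missing or unloadable engine file is reported explicitly before the stream error appears.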

I tracked it down to the lpr_us_onnx_b16.engine file, which I didn't have. But now when I try to generate it with tao-converter, I get the following error:
Command:
./tao-converter -k nvidia_tlt \
  -p image_input,1x3x48x96,4x3x48x96,16x3x48x96 \
  deepstream-lpr-app/models/LP/LPR/us_lprnet_baseline18_deployable.etlt \
  -t fp16 -e deepstream-lpr-app/models/LP/LPR/lpr_us_onnx_b16.engine

Error: no input dimensions given

Any idea how this can be fixed?
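As a side note, a small pre-flight check would have caught the missing engine before the pipeline started. A sketch, using the paths from the command above (not part of the sample itself):

```python
import os

# Model files the LPR sample's nvinfer configs reference
# (paths taken from the tao-converter command in this thread)
required = [
    "deepstream-lpr-app/models/LP/LPR/lpr_us_onnx_b16.engine",
    "deepstream-lpr-app/models/LP/LPR/us_lprnet_baseline18_deployable.etlt",
]

# Collect any paths that do not exist on disk
missing = [f for f in required if not os.path.isfile(f)]
if missing:
    print("Missing model files:", missing)
```

Running this before launching the app turns the opaque "streaming stopped, reason error (-5)" into an explicit list of absent files.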

Which tao-converter version did you use?

I have tao-converter-jp46-trt8.0.1.6

Please use the compatible version; you should use the TRT8.2 build:
TAO Converter | NVIDIA NGC v3.22.05_trt8.2_x86
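With a TRT8.2-matched tao-converter in place, the same flags from the command above should work unchanged. A sketch (the binary path is an assumption; also note that on Jetson the build must match your platform, not just TRT 8.2, since the existing converter is a jp46 build):

```shell
# Re-run the conversion with the TRT8.2-compatible tao-converter.
# CONVERTER path is an assumption — point it at the downloaded TRT8.2 build.
CONVERTER=./tao-converter
MODEL=deepstream-lpr-app/models/LP/LPR/us_lprnet_baseline18_deployable.etlt
ENGINE=deepstream-lpr-app/models/LP/LPR/lpr_us_onnx_b16.engine

if [ -x "$CONVERTER" ]; then
    # -p gives min/opt/max shapes for the dynamic batch dimension (1/4/16)
    "$CONVERTER" -k nvidia_tlt \
        -p image_input,1x3x48x96,4x3x48x96,16x3x48x96 \
        -t fp16 -e "$ENGINE" "$MODEL"
else
    echo "tao-converter not found at $CONVERTER" >&2
fi
```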