Error in running LPR app

• Hardware Platform (Jetson / GPU) Orin AGX
• DeepStream Version 6.1.1
• JetPack Version (valid for Jetson only) 5.0.2
• TensorRT Version 8.4.1.5
• Issue Type( questions, new requirements, bugs) Bug
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

I have cloned the DeepStream LPR app repo (NVIDIA-AI-IOT/deepstream_lpr_app: Sample app code for LPR deployment on DeepStream) and followed all the instructions to build and run the application.

I tried to execute the app with the following command:

./deepstream-lpr-app 1 3 0 infer /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4 out.mp4

but I get the following error:

0:14:28.834046648 8802 0xaaaaf86d18d0 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1955> [UID = 1]: serialize cuda engine to file: /home/nvidia/third-party-repos/deepstream_lpr_app/models/tao_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt_b4_gpu0_int8.engine successfully
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x544x960
1 OUTPUT kFLOAT output_bbox/BiasAdd 16x34x60
2 OUTPUT kFLOAT output_cov/Sigmoid 4x34x60

0:14:29.016425802 8802 0xaaaaf86d18d0 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus: [UID 1]: Load new model:trafficamnet_config.txt sucessfully
Running…
qtdemux pad video/x-h264
h264parser already linked. Ignoring.
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
Frame Number = 0 Vehicle Count = 0 Person Count = 0 License Plate Count = 0
/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform.cpp:4184: => VIC Configuration failed image scale factor exceeds 16, use GPU for Transformation
0:14:29.358124864 8802 0xaaaaf79aef60 WARN nvinfer gstnvinfer.cpp:1389:convert_batch_and_push_to_input_thread: error: NvBufSurfTransform failed with error -3 while converting buffer
ERROR from element secondary-infer-engine1: NvBufSurfTransform failed with error -3 while converting buffer
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1389): convert_batch_and_push_to_input_thread (): /GstPipeline:pipeline/GstNvInfer:secondary-infer-engine1
Returned, stopping playback
Frame Number = 1 Vehicle Count = 0 Person Count = 0 License Plate Count = 0

** (deepstream-lpr-app:8802): WARNING **: 15:38:32.577: Use gst_egl_image_allocator_alloc() to allocate from this allocator
[NvMultiObjectTracker] De-initialized
Average fps 0.000000
Totally 0 plates are inferred

Please add the properties below to lpd_yolov4-tiny_us.txt.
If scaling-compute-hw = VIC, input-object-min-height/width must be even and greater than or equal to (model height or width)/16.

input-object-min-height=30
input-object-min-width=40
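
For reference, these keys belong in the [property] group of the SGIE config. A minimal sketch of how the section might look after the change (the surrounding keys are placeholders for whatever lpd_yolov4-tiny_us.txt already contains, not the actual file contents):

```
[property]
# ... existing keys (model paths, batch-size, etc.) stay as they are ...

# Skip detected objects that are too small for VIC to scale:
# VIC downscaling is limited to 16x, so each min dimension must be
# even and >= (model dimension)/16.
input-object-min-height=30
input-object-min-width=40
```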


I couldn’t find scaling-compute-hw in the lpd_yolov4-tiny_us.txt file.

After adding these lines

input-object-min-height=30
input-object-min-width=40

the app runs correctly.

Also

This is the LPD model, right? Can you please share the model’s dimensions?

Yes, LPD model.
infer-dims=3;480;640
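
With infer-dims=3;480;640, the recommended minimums follow directly from the 16x VIC scaling limit mentioned in the error message. A quick sanity check in Python (a sketch of the arithmetic, not DeepStream API code):

```python
import math

# LPD model input dimensions from infer-dims=3;480;640 (CHW)
model_h, model_w = 480, 640

def min_even_dim(model_dim, max_scale=16):
    """Smallest even object dimension VIC can still scale up to model_dim,
    given that VIC scaling is limited to a factor of max_scale."""
    d = math.ceil(model_dim / max_scale)
    return d if d % 2 == 0 else d + 1  # round up to the next even value

print(min_even_dim(model_h))  # 480 / 16 = 30 -> input-object-min-height
print(min_even_dim(model_w))  # 640 / 16 = 40 -> input-object-min-width
```

Both values come out even already, which is why 30 and 40 satisfy the constraint exactly.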


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.