Currently I’m using a custom-trained LPRNet model. It shows correct predictions with standalone inference, but with DeepStream the last character is missing.
The device I am working on:
DeepStream version:
deepstream-app --version
deepstream-app version 6.0.1
DeepStreamSDK 6.0.1
GStreamer version:
gst-inspect-1.0 --gst-version
GStreamer Core Library version 1.14.5
Below is the config file I’m using:
[property]
gpu-id=0
tlt-encoded-model=/home/xx/deepstream_6_lpd_lpr/models/LP/LPR/lprnet_epoch-015.etlt
tlt-model-key=nvidia_tlt
model-engine-file=/home/xx/deepstream_6_lpd_lpr/models/LP/LPR/lprnet_epoch-015.etlt_b1_gpu0_fp16.engine
labelfile-path=/home/xx/deepstream_6_lpd_lpr/models/LP/LPR/labels_us.txt
#onnx-file=/home/xx/deepstream_6_lpd_lpr/models/LP/LPR/lprnet_epoch-015.onnx
batch-size=1
##0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=3
gie-unique-id=3
output-blob-names=tf_op_layer_ArgMax;tf_op_layer_Max
#0=Detection 1=Classifier 2=Segmentation
network-type=2
interval=2
parse-classifier-func-name=NvDsInferParseCustomNVPlate
custom-lib-path=../nvinfer_custom_lpr_parser/libnvdsinfer_custom_impl_lpr.so
process-mode=2
operate-on-gie-id=2
net-scale-factor=0.00392156862745098
#net-scale-factor=1.0
#0=RGB 1=BGR 2=GRAY
model-color-format=0
[class-attrs-all]
threshold=0.5
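Since this is configured as a classifier with a custom parse function, the parser is what turns the tf_op_layer_ArgMax / tf_op_layer_Max blobs into the plate string. As a hedged illustration (this is not NVIDIA's actual NvDsInferParseCustomNVPlate implementation; decodePlate, blankId, and the label layout are hypothetical names), a greedy decode over those two blobs looks roughly like the sketch below — note how an off-by-one in the loop bound (looping to size-1) would silently drop exactly the last character:

```cpp
#include <string>
#include <vector>

// Hypothetical greedy decoder over the two LPR output blobs:
//   argmax[t]  = best class index per timestep (tf_op_layer_ArgMax)
//   maxProb[t] = its confidence               (tf_op_layer_Max)
// blankId marks the CTC blank; repeated non-blank indices are collapsed.
std::string decodePlate(const std::vector<int> &argmax,
                        const std::vector<float> &maxProb,
                        const std::vector<std::string> &labels,
                        int blankId, float threshold) {
    std::string plate;
    int prev = blankId;
    // A bound of "argmax.size() - 1" here, instead of "argmax.size()",
    // is the kind of off-by-one that drops only the final character.
    for (size_t t = 0; t < argmax.size(); ++t) {
        int c = argmax[t];
        if (c != blankId && c != prev && maxProb[t] >= threshold)
            plate += labels[c];
        prev = c;
    }
    return plate;
}
```

A truncated timestep range, a wrong blank index, or a too-strict confidence threshold at the tail of the sequence would all surface as "AP39TY245" instead of "AP39TY2455".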
Could you share the full command and the full log? Thanks.
Also, could you elaborate on the “missing last value”?
Spec file used for training:
tutorial_spec.txt (1.3 KB)
Below is one of the testing image:
Correct output: AP39TY2455
Deepstream predicted: AP39TY245
Inference output: AP39TY2455
The same thing happens with all the plates in the video tested using DeepStream.
Command used:
./deepstream-lpr-app 1 2 0 infer VJ_1H_part_2_1.mp4 output.264
Please let me know if I'm missing something important.
Does the LPDNet bounding box cover the full area of the license plate? If possible, could you share the output.264 via private message?
Command used:
./deepstream-lpr-app 1 2 0 infer /home/akashsingh/Documents/New_folder/sample_day.mp4/VJ_1H_part_1.mp4 output2.264
Using the above command, we get the following output at the terminal, but we are unable to get output2.264.
Training accuracy was 94%.
Dataset size:
training: 14330
val: 1594
Morganh
November 23, 2023, 7:14am
Please run as ./deepstream-lpr-app 1 1 0
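For reference, the sample app's positional arguments (per the deepstream_lpr_app README — double-check against your checkout, as options can differ between versions) are roughly as follows; the second argument explains why "1 2 0" produces no output file while "1 1 0" does:

```shell
# Usage sketch (argument meanings taken from the repo README):
#   arg 1: 1 = US model, 2 = Chinese model
#   arg 2: 1 = encode to .264 file, 2 = fakesink (no file!), 3 = display
#   arg 3: 0 = ROI disabled, 1 = ROI enabled
#   then:  infer | triton, input mp4 file(s), output .264 file
./deepstream-lpr-app 1 1 0 infer VJ_1H_part_1.mp4 output2.264
```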
Thanks @Morganh for your help in guiding me to find the actual problem.
Actually, the bounding boxes I am getting are not correct.
How can I solve this?
Morganh
November 24, 2023, 10:54am
OK, this is a similar issue to the topic Onnx python post-processing vs. TAO train post processing - #6 by dan193. The LPD model is based on the detectnet_v2 network. For detectnet_v2, please add --onnx_route tf2onnx when running the export.
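A hedged sketch of where that flag goes (the exact CLI syntax and flags vary across TAO Toolkit versions, and every path, key, and spec file below is a placeholder — verify against your version's `tao detectnet_v2 export --help`):

```shell
# Re-export a detectnet_v2-based LPD model via the tf2onnx route.
# All paths, the key, and the spec file name are placeholders.
tao detectnet_v2 export \
    -m /workspace/lpd/weights/lpd_model.tlt \
    -k nvidia_tlt \
    -e /workspace/lpd/specs/lpd_train_spec.txt \
    -o /workspace/lpd/export/lpd_model.onnx \
    --onnx_route tf2onnx
```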
Actually I’m using the pretrained “yolov4_tiny_usa_deployable.etlt” model.
For converting the .etlt to an .engine file I am using:
tlt-encoded-model=/home/akash/deepstream_lpr_app/models/LP/LPD/yolov4_tiny_usa_deployable.etlt
tlt-model-key=nvidia_tlt
infer-dims=3;480;640
How can I use “--onnx_route tf2onnx” in this case?
Morganh
November 27, 2023, 4:58am
Is it possible to share a test video with me for reproducing? If yes, you can send it with private message.
Morganh
November 27, 2023, 9:28am
Hi,
Thanks for the info.
As you mentioned in the private message, after you made some changes in the “nvdsinfer_custombboxparser_tao.cpp” file you got correct bounding boxes, but the error still remains the same: the last character or digit is missing from the LPR output string.
Does “number is missing” happen in every bbox?
The model does return the complete OCR string, but only rarely: out of 20 OCR results, only 2 or 3 are complete, and this happens only for a few specific vehicles.
Hello @Morganh, did you find any solution for the above problem?
Morganh
November 30, 2023, 2:43am
Sorry for the late reply. I set up an environment and ran the default GitHub - NVIDIA-AI-IOT/deepstream_lpr_app: Sample app code for LPR deployment on DeepStream, and got the expected result against your test video.
Did you ever try the default GitHub repo successfully?
BTW, my steps are as below.
$ docker run --runtime=nvidia -it --rm -v /home/morganh:/home/morganh nvcr.io/nvidia/deepstream:6.2-triton /bin/bash
$ git clone https://github.com/NVIDIA-AI-IOT/deepstream_lpr_app.git
$ cd deepstream_lpr_app/
$ ./download_convert.sh us 0
$ make
$ cd deepstream-lpr-app
$ cp dict_us.txt dict.txt
$ ./deepstream-lpr-app 1 1 0 infer VJ_1H_part_2_1_crop.mp4 out.264
system
Closed
December 18, 2023, 6:49am
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.