License Plate Recognition not working properly with tensor input

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
GPU
• DeepStream Version
DeepStream 6.0
• NVIDIA GPU Driver Version (valid for GPU only)
Driver Version: 470.103.01

Hi.

I am running the following pipeline for license plate recognition:
uridecodebin -> nvstreammux -> primary nvinfer (car detection) -> nvtracker -> secondary nvinfer (license plate detection) -> nvdspreprocess -> primary nvinfer (OCR on tensor input) -> nvmultistreamtiler -> nvvideoconvert -> nvdsosd -> nveglglessink
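For context, a rough gst-launch-1.0 sketch of the same pipeline (single source shown for brevity; the config file names, mux resolution, and tracker lib path are placeholders, not my actual setup):

```shell
gst-launch-1.0 uridecodebin uri=file:///path/to/video.mp4 ! m.sink_0 \
  nvstreammux name=m batch-size=1 width=1920 height=1080 ! \
  nvinfer config-file-path=car_detector.txt ! \
  nvtracker ll-lib-file=/opt/nvidia/deepstream/lib/libnvds_nvmultiobjecttracker.so ! \
  nvinfer config-file-path=lp_detector.txt ! \
  nvdspreprocess config-file=preprocess.txt ! \
  nvinfer config-file-path=ocr.txt input-tensor-meta=true ! \
  nvmultistreamtiler ! nvvideoconvert ! nvdsosd ! nveglglessink
```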

I added the nvdspreprocess element because I would like to do some pre-processing on license plates in the future (perspective correction), as suggested here. I made some changes to gstnvdspreprocess.cpp to change the ROI at runtime to the object detected by the previous model (the license plate) instead of the full frame. I can see the ROIs fine on the display.
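The ROI override I made in gstnvdspreprocess.cpp essentially replaces the static config ROI with the detected object's rect before transformation. A simplified Python model of that logic (the `Rect`/`ObjectMeta` types and `LP_CLASS_ID` are minimal stand-ins I chose for illustration, not the real DeepStream structures):

```python
from dataclasses import dataclass

@dataclass
class Rect:  # stand-in for NvOSD_RectParams
    left: float
    top: float
    width: float
    height: float

@dataclass
class ObjectMeta:  # stand-in for NvDsObjectMeta
    class_id: int
    rect: Rect

LP_CLASS_ID = 0  # class id of the license-plate detector output (assumption)

def rois_from_objects(obj_metas, frame_w, frame_h):
    """Build one preprocess ROI per detected license plate,
    clamped to the frame bounds, instead of using the static
    roi-params-src-* entries from the config file."""
    rois = []
    for obj in obj_metas:
        if obj.class_id != LP_CLASS_ID:
            continue
        r = obj.rect
        left = max(0.0, r.left)
        top = max(0.0, r.top)
        width = min(r.width, frame_w - left)
        height = min(r.height, frame_h - top)
        rois.append(Rect(left, top, width, height))
    return rois
```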

The thing is, the pipeline works fine (exactly as in the DeepStream LPR example) until I add the pre-process element. In other words, I get meaningful output on the display, but once I add the preprocess element and change the OCR model to a primary model, I get no output from it (no character labels), even though the custom parser for OCR, NvDsInferParseCustomNVPlate, is invoked and returns meaningful values (I was able to print the license plate characters at runtime, but they are not displayed).

[property]
enable=1
gpu-id=0

unique-id=3
    # must match the gie-unique-id of the nvinfer element that consumes this tensor
target-unique-ids=4


    # 0=NCHW, 1=NHWC, 2=CUSTOM
network-input-order=0

processing-width=96
processing-height=48

scaling-buf-pool-size=6
tensor-buf-pool-size=6

    # tensor shape based on network-input-order
network-input-shape= 2;3;48;96
    # 0=RGB, 1=BGR, 2=GRAY
network-color-format=0

    # 0=FP32, 1=UINT8, 2=INT8, 3=UINT32, 4=INT32, 5=FP16
tensor-data-type=0
tensor-name=image_input

    # 0=NVBUF_MEM_DEFAULT 1=NVBUF_MEM_CUDA_PINNED 2=NVBUF_MEM_CUDA_DEVICE 3=NVBUF_MEM_CUDA_UNIFIED
scaling-pool-memory-type=0
    # 0=NvBufSurfTransformCompute_Default 1=NvBufSurfTransformCompute_GPU 2=NvBufSurfTransformCompute_VIC
scaling-pool-compute-hw=1
    # Scaling Interpolation method
    # 0=NvBufSurfTransformInter_Nearest 1=NvBufSurfTransformInter_Bilinear 2=NvBufSurfTransformInter_Algo1
    # 3=NvBufSurfTransformInter_Algo2 4=NvBufSurfTransformInter_Algo3 5=NvBufSurfTransformInter_Algo4
    # 6=NvBufSurfTransformInter_Default
scaling-filter=0

custom-lib-path=../../gst-nvdspreprocess/nvdspreprocess_lib/libcustom2d_preprocess.so
custom-tensor-preparation-function=CustomTensorPreparation

[user-configs]
pixel-normalization-factor=0.00392156862745098

[group-0]
src-ids=0
custom-input-transformation-function=CustomAsyncTransformation
process-on-roi=0
roi-params-src-0=0;0;200;200

[group-1]
src-ids=1
custom-input-transformation-function=CustomAsyncTransformation
process-on-roi=0
roi-params-src-1=350;200;300;100

• primary nvinfer config (OCR) (I already set the input-tensor-meta property to true)
[property]

gie-unique-id=4
#operate-on-gie-id=3
#operate-on-class-ids=0
output-tensor-meta=1

process-mode=1
#force-implicit-batch-dim=1

gpu-id=0
model-engine-file=models/LP/LPR/us_lprnet_baseline18_deployable.etlt_b2_gpu0_fp32.engine
labelfile-path=models/LP/LPR/labels_us.txt
tlt-encoded-model=models/LP/LPR/us_lprnet_baseline18_deployable.etlt
tlt-model-key=nvidia_tlt

#output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid

batch-size=2

    ## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=3

    #0=Detection 1=Classifier 2=Segmentation
network-type=1

parse-classifier-func-name=NvDsInferParseCustomNVPlate
custom-lib-path=nvinfer_custom_lpr_parser/libnvdsinfer_custom_impl_lpr.so

net-scale-factor=0.00392156862745098

    #0=RGB 1=BGR 2=GRAY
model-color-format=0

[class-attrs-all]
threshold=0.5

What could I possibly be missing here? Thanks.

nvdspreprocess and nvinfer are both open source. Please debug it yourself.

Hi,

Thanks for your reply.

After some debugging, I figured out that when input_tensor_from_meta is enabled, the classifier meta is added to frame->roi_meta and not to object_meta (nvds_add_classifier_meta_to_roi is used instead of nvds_add_classifier_meta_to_object). Maybe that’s why nvdsosd is unable to display it?

Any idea how I can make nvdsosd display classifier meta attached to ROIs rather than objects?

Thanks

An update: in the attach_metadata_classifier function, when the input_tensor_from_meta flag is enabled, the text and rect parameters are never filled and nvds_add_obj_meta_to_frame is not invoked at all. I had to handle this case myself, and now I am able to see the text displayed on screen.
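The workaround boils down to: for each ROI that carries classifier output (the recognized plate string), build the (text, rect) pair that the stock code path skips when input_tensor_from_meta is set. A simplified Python model of that extraction (the `RoiMeta`/`ClassifierMeta` types are stand-ins for the NvDsRoiMeta/NvDsClassifierMeta structures, and `osd_items_from_rois` is a name I made up):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Rect:  # stand-in for NvOSD_RectParams
    left: float
    top: float
    width: float
    height: float

@dataclass
class ClassifierMeta:  # stand-in for NvDsClassifierMeta + its label info
    labels: List[str]

@dataclass
class RoiMeta:  # stand-in for NvDsRoiMeta
    rect: Rect
    classifier_metas: List[ClassifierMeta] = field(default_factory=list)

def osd_items_from_rois(roi_metas) -> List[Tuple[str, Rect]]:
    """For each ROI that received classifier output, emit a
    (display text, rect) pair that can be handed to nvdsosd."""
    items = []
    for roi in roi_metas:
        labels = [lbl for cm in roi.classifier_metas for lbl in cm.labels]
        if labels:
            items.append((" ".join(labels), roi.rect))
    return items
```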

Please confirm on your side whether this is a bug in gst-nvinfer, or whether I am missing a parameter.

This is not a bug. The display of customized text, rects, circles, etc. can be handled with a customized NvDsDisplayMeta.
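The NvDsDisplayMeta route would mean a pad probe downstream of the OCR nvinfer that reads the ROI-attached classifier output and populates text params itself. A minimal sketch of how such a probe might fill the display meta, assuming the plate text and ROI position have already been extracted (`TextParams`/`DisplayMeta` here are simplified stand-ins for NvOSD_TextParams/NvDsDisplayMeta, not the real API):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class TextParams:  # stand-in for NvOSD_TextParams
    display_text: str
    x_offset: int
    y_offset: int
    font_size: int = 12

@dataclass
class DisplayMeta:  # stand-in for NvDsDisplayMeta
    text_params: List[TextParams] = field(default_factory=list)

def fill_display_meta(plate_results: List[Tuple[str, Tuple[int, int]]]) -> DisplayMeta:
    """Attach one text entry per recognized plate, placed just above
    the ROI's top-left corner, the way a pad probe would populate a
    display meta acquired from the batch pool before nvdsosd runs."""
    dm = DisplayMeta()
    for text, (left, top) in plate_results:
        dm.text_params.append(TextParams(text, int(left), max(0, int(top) - 15)))
    return dm
```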

This is not customized text, a rect, or a circle! This is the classification output of the inferred model, which is neither attached to the object meta nor displayed by nvdsosd.

For the case of using “input_tensor_from_meta”, we treat it as a customized model.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.