Secondary OCR on license plate after detection

Hi,
I want to detect license plates and OCR all of the characters in each plate.
I trained the detection model using detectnet_v2 in TLT, then deployed it with DeepStream 5 on a Jetson TX2; it detects and draws boxes around license plates in the H.264 stream. Now I want to run secondary OCR on the license plates detected by the network.
The detected license plate is shown here.


How can I use my own OCR model (a .pth model) in DeepStream 5 to OCR the license plates?
I trained my OCR model with PyTorch using the parameters below:
--Transformation TPS --FeatureExtraction ResNet --SequenceModeling BiLSTM --Prediction Attn

• Hardware Platform: Jetson TX2
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only): JetPack 4.4
• TensorRT Version: 7.1.0

Hi,

Since DeepStream uses TensorRT as its inference engine, it's recommended to check whether your model can be converted to TensorRT first.
To do this, please follow the instructions in this sample:

/usr/src/tensorrt/samples/python/network_api_pytorch_mnist
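
For example, one common way to sanity-check convertibility is to export the PyTorch model to ONNX and then try building an engine with trtexec. Below is a minimal sketch under some assumptions: the file names and the 1x32x100 input size are placeholders, the .pth file is assumed to contain the whole model rather than just a state_dict, and an attention-based prediction head may take extra decoding inputs that need special handling during export. (The sample above shows an alternative path that rebuilds the network directly with the TensorRT network API.)

import torch

# Placeholder file name; assumes the whole model was saved with torch.save(model, ...).
# If you saved a state_dict, instantiate the model class first and call load_state_dict.
model = torch.load("ocr_model.pth", map_location="cpu")
model.eval()

# Placeholder input: N x C x H x W grayscale plate crop; match your training size.
dummy = torch.randn(1, 1, 32, 100)

torch.onnx.export(
    model, dummy, "ocr_model.onnx",
    opset_version=11,
    input_names=["input"],
    output_names=["output"],
)

If the export succeeds, you can then try building an engine on the TX2 with /usr/src/tensorrt/bin/trtexec --onnx=ocr_model.onnx --fp16; any unsupported op (the TPS module's grid sampling is a likely candidate on TensorRT 7.1) will show up at one of these two steps.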

Thanks.


Hi,
For the secondary OCR, I switched to another Caffe model of mine, but it fails with ‘could not find output coverage layer for parsing objects’.

  • Here is my application config file.

jh.log (4.0 KB)

  • Here is my secondary config file.

config_infer_secondary_plr.log (2.6 KB)

  • Here is the prototxt file of my Caffe model.

CharacterRecognization.log (1.9 KB)

When I run
deepstream-app -c /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/jh.txt
it shows this error:
0:00:01.369311799 27295 0x13e3f490 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 5]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1591> [UID = 5]: Trying to create engine from model files
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.
INFO: [TRT]: Detected 1 inputs and 1 output network tensors.
0:00:07.249524702 27295 0x13e3f490 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 5]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1624> [UID = 5]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-5.0/samples/models/jh/CharacterRecognization.caffemodel_b1_gpu0_fp16.engine successfully
INFO: [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT data 1x30x14
1 OUTPUT kFLOAT prob 65x1x1

0:00:07.279213504 27295 0x13e3f490 INFO nvinfer gstnvinfer_impl.cpp:311:notifyLoadModelStatus:<secondary_gie_0> [UID 5]: Load new model:/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/config_infer_secondary_plr.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is OFF
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:08.170832131 27295 0x13e3f490 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1577> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/../../models/jh/jh.etlt_b1_gpu0_fp16.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x1088x1920
1 OUTPUT kFLOAT output_bbox/BiasAdd 4x68x120
2 OUTPUT kFLOAT output_cov/Sigmoid 1x68x120

0:00:08.170995203 27295 0x13e3f490 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1681> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/../../models/jh/jh.etlt_b1_gpu0_fp16.engine
0:00:08.175546157 27295 0x13e3f490 INFO nvinfer gstnvinfer_impl.cpp:311:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/config-jh.txt sucessfully

Runtime commands:

  • h: Print this help

  • q: Quit

  • p: Pause

  • r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.

  • To go back to the tiled display, right-click anywhere on the window.

**PERF: FPS 0 (Avg)
**PERF: 0.00 (0.00)
** INFO: <bus_callback:181>: Pipeline ready

Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
** INFO: <bus_callback:167>: Pipeline running

KLT Tracker Init
**PERF: 25.84 (25.55)
**PERF: 25.18 (25.26)
**PERF: 25.13 (25.17)
**PERF: 25.03 (25.12)
0:00:30.053654718 27295 0x137f0140 ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 5]: Error in NvDsInferContextImpl::parseBoundingBox() <nvdsinfer_context_impl_output_parsing.cpp:59> [UID = 5]: Could not find output coverage layer for parsing objects
0:00:30.053938943 27295 0x137f0140 ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 5]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:573> [UID = 5]: Failed to parse bboxes
Segmentation fault (core dumped)

Hi,

Coverage and bbox are the standard output layers used by a detector.
It looks like your secondary model is a classifier, so please update the network-type parameter described here:
https://docs.nvidia.com/metropolis/deepstream/plugin-manual/index.html#page/DeepStream%20Plugins%20Development%20Guide/deepstream_plugin_details.3.01.html#wwpID0E0WDB0HA

network-type (integer)
  • 0: Detector
  • 1: Classifier
  • 2: Segmentation
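
For example, in the [property] group of your config_infer_secondary_plr.txt, this would look like the snippet below (a sketch: the output-blob-names value comes from the "prob 65x1x1" line in your engine info, and the threshold is only an illustration):

[property]
# 1 = classifier; the default 0 = detector is what makes nvinfer look for coverage/bbox layers
network-type=1
# classifier output layer, per the engine info printed in your log
output-blob-names=prob
# run as a secondary GIE on objects found by the primary detector
process-mode=2
# illustrative confidence threshold for emitted labels
classifier-threshold=0.5

With network-type=1, nvinfer runs its classifier output parsing instead of parseBoundingBox(), which should remove the "could not find output coverage layer" error.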

Thanks.