Creating a Real-Time License Plate Detection and Recognition App

@Morganh Something kind of interesting: I created another clone so I could re-run tlt-converter and add output layer names with -o.

Previous command:

./tlt-converter -k nvidia_tlt -p image_input,1x3x48x96,4x3x48x96,16x3x48x96 models/LP/LPR/us_lprnet_baseline18_deployable.etlt -t fp16 -e models/LP/LPR/lpr_us_onnx_b16.engine

New command, adding -o output_cov/Sigmoid,output_bbox/BiasAdd:

./tlt-converter -k nvidia_tlt -o output_cov/Sigmoid,output_bbox/BiasAdd -p image_input,1x3x48x96,4x3x48x96,16x3x48x96 models/LP/LPR/us_lprnet_baseline18_deployable.etlt -t fp16 -e models/LP/LPR/lpr_us_onnx_b16.engine
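
For what it's worth, my (unverified) understanding is that -o should list the output blobs of the model being converted, so for the LPRNet .etlt those would be its own outputs, tf_op_layer_ArgMax and tf_op_layer_Max (they show up in the engine info in the log below), while output_cov/Sigmoid and output_bbox/BiasAdd are the detector (LPD/TrafficCamNet) outputs. A sketch of that variant, untested:

./tlt-converter -k nvidia_tlt -o tf_op_layer_ArgMax,tf_op_layer_Max -p image_input,1x3x48x96,4x3x48x96,16x3x48x96 models/LP/LPR/us_lprnet_baseline18_deployable.etlt -t fp16 -e models/LP/LPR/lpr_us_onnx_b16.engine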

The build/run below is still unsuccessful, but maybe it's a step in the right direction?

bryan@bryan-desktop:~/opt/nvidia/deepstream/deepstream-5.1/samples/models/test/deepstream_lpr_app/deepstream-lpr-app$ ./deepstream-lpr-app 1 2 0 ../../../../streams/sample_720p.mp4 ../../../../streams/sample_720p.mp4 output.264
Request sink_0 pad from streammux
Request sink_1 pad from streammux
Warning: 'input-dims' parameter has been deprecated. Use 'infer-dims' instead.
Warning: 'input-dims' parameter has been deprecated. Use 'infer-dims' instead.
Unknown or legacy key specified 'process_mode' for group [property]
Now playing: 1
Opening in BLOCKING MODE
Opening in BLOCKING MODE 
Opening in BLOCKING MODE
Opening in BLOCKING MODE 
0:00:10.921601849 19412   0x55639b70d0 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary-infer-engine2> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1702> [UID = 3]: deserialized trt engine from :/home/bryan/opt/nvidia/deepstream/deepstream-5.1/samples/models/test/deepstream_lpr_app/models/LP/LPR/lpr_us_onnx_b16.engine
INFO: [FullDims Engine Info]: layers num: 3
0   INPUT  kFLOAT image_input     3x48x96         min: 1x3x48x96       opt: 4x3x48x96       Max: 16x3x48x96      
1   OUTPUT kINT32 tf_op_layer_ArgMax 24              min: 0               opt: 0               Max: 0               
2   OUTPUT kFLOAT tf_op_layer_Max 24              min: 0               opt: 0               Max: 0               

ERROR: [TRT]: INVALID_ARGUMENT: Cannot find binding of given name: output_bbox/BiasAdd
0:00:10.921851646 19412   0x55639b70d0 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary-infer-engine2> NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1670> [UID = 3]: Could not find output layer 'output_bbox/BiasAdd' in engine
ERROR: [TRT]: INVALID_ARGUMENT: Cannot find binding of given name: output_cov/Sigmoid
0:00:10.921900970 19412   0x55639b70d0 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary-infer-engine2> NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1670> [UID = 3]: Could not find output layer 'output_cov/Sigmoid' in engine
0:00:10.921961909 19412   0x55639b70d0 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary-infer-engine2> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1806> [UID = 3]: Use deserialized engine model: /home/bryan/opt/nvidia/deepstream/deepstream-5.1/samples/models/test/deepstream_lpr_app/models/LP/LPR/lpr_us_onnx_b16.engine
0:00:10.962857824 19412   0x55639b70d0 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary-infer-engine2> [UID 3]: Load new model:lpr_config_sgie_us.txt sucessfully
0:00:10.963561485 19412   0x55639b70d0 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary-infer-engine1> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 2]: Trying to create engine from model files
WARNING: INT8 not supported by platform. Trying FP16 mode.
WARNING: INT8 not supported by platform. Trying FP16 mode.
INFO: [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: [TRT]: Detected 1 inputs and 2 output network tensors.
0:01:43.118323033 19412   0x55639b70d0 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary-infer-engine1> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1749> [UID = 2]: serialize cuda engine to file: /home/bryan/opt/nvidia/deepstream/deepstream-5.1/samples/models/test/deepstream_lpr_app/models/LP/LPD/usa_pruned.etlt_b16_gpu0_fp16.engine successfully
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x480x640       
1   OUTPUT kFLOAT output_bbox/BiasAdd 4x30x40         
2   OUTPUT kFLOAT output_cov/Sigmoid 1x30x40         

0:01:43.281237925 19412   0x55639b70d0 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary-infer-engine1> [UID 2]: Load new model:lpd_us_config.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvdcf.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
!! [WARNING][NvDCF] Unknown param found: minMatchingScore4Motion
[NvDCF][Warning] `minTrackingConfidenceDuringInactive` is deprecated
!! [WARNING][NvDCF] Unknown param found: matchingScoreWeight4Motion
[NvDCF] Initialized
ERROR: Deserialize engine failed because file path: /home/bryan/opt/nvidia/deepstream/deepstream-5.1/samples/models/test/deepstream_lpr_app/deepstream-lpr-app/../models/tlt_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt_b1_gpu0_int8.engine open error
0:01:45.437559981 19412   0x55639b70d0 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary-infer-engine1> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1691> [UID = 1]: deserialize engine from file :/home/bryan/opt/nvidia/deepstream/deepstream-5.1/samples/models/test/deepstream_lpr_app/deepstream-lpr-app/../models/tlt_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt_b1_gpu0_int8.engine failed
0:01:45.437606544 19412   0x55639b70d0 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary-infer-engine1> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1798> [UID = 1]: deserialize backend context from engine from file :/home/bryan/opt/nvidia/deepstream/deepstream-5.1/samples/models/test/deepstream_lpr_app/deepstream-lpr-app/../models/tlt_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt_b1_gpu0_int8.engine failed, try rebuild
0:01:45.437640139 19412   0x55639b70d0 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-infer-engine1> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 1]: Trying to create engine from model files
WARNING: INT8 not supported by platform. Trying FP16 mode.
WARNING: INT8 not supported by platform. Trying FP16 mode.
INFO: [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: [TRT]: Detected 1 inputs and 2 output network tensors.
0:02:33.741014630 19412   0x55639b70d0 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-infer-engine1> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1749> [UID = 1]: serialize cuda engine to file: /home/bryan/opt/nvidia/deepstream/deepstream-5.1/samples/models/test/deepstream_lpr_app/models/tlt_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt_b1_gpu0_fp16.engine successfully
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x544x960       
1   OUTPUT kFLOAT output_bbox/BiasAdd 16x34x60        
2   OUTPUT kFLOAT output_cov/Sigmoid 4x34x60         

0:02:34.035270946 19412   0x55639b70d0 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-infer-engine1> [UID 1]: Load new model:trafficamnet_config.txt sucessfully
Running...
qtdemux pad video/x-h264
qtdemux pad video/x-h264
h264parser already linked. Ignoring.
h264parser already linked. Ignoring.
NvMMLiteOpen : Block : BlockType = 261 
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
NvMMLiteBlockCreate : Block : BlockType = 261 
Frame Number = 0 Vehicle Count = 0 Person Count = 0 License Plate Count = 0
Frame Number = 1 Vehicle Count = 0 Person Count = 0 License Plate Count = 0
Frame Number = 2 Vehicle Count = 0 Person Count = 0 License Plate Count = 0
Frame Number = 3 Vehicle Count = 6 Person Count = 4 License Plate Count = 0
Frame Number = 4 Vehicle Count = 6 Person Count = 4 License Plate Count = 0
Frame Number = 5 Vehicle Count = 6 Person Count = 4 License Plate Count = 0
Frame Number = 6 Vehicle Count = 6 Person Count = 4 License Plate Count = 0
Frame Number = 7 Vehicle Count = 8 Person Count = 4 License Plate Count = 0
Frame Number = 8 Vehicle Count = 8 Person Count = 4 License Plate Count = 0
Frame Number = 9 Vehicle Count = 10 Person Count = 4 License Plate Count = 0
Frame Number = 10 Vehicle Count = 10 Person Count = 4 License Plate Count = 0
Frame Number = 11 Vehicle Count = 8 Person Count = 4 License Plate Count = 0
Frame Number = 12 Vehicle Count = 8 Person Count = 4 License Plate Count = 0
Frame Number = 13 Vehicle Count = 8 Person Count = 4 License Plate Count = 0
Frame Number = 14 Vehicle Count = 8 Person Count = 4 License Plate Count = 0
Frame Number = 15 Vehicle Count = 8 Person Count = 4 License Plate Count = 0
Frame Number = 16 Vehicle Count = 8 Person Count = 4 License Plate Count = 0
Frame Number = 17 Vehicle Count = 10 Person Count = 4 License Plate Count = 0
open dictionary file failed.
0:02:40.895026375 19412   0x5563261d40 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<secondary-infer-engine2> NvDsInferContext[UID 3]: Error in NvDsInferContextImpl::fillClassificationOutput() <nvdsinfer_context_impl_output_parsing.cpp:796> [UID = 3]: Failed to parse classification attributes using custom parse function
open dictionary file failed.
0:02:40.895353049 19412   0x5563261d40 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<secondary-infer-engine2> NvDsInferContext[UID 3]: Error in NvDsInferContextImpl::fillClassificationOutput() <nvdsinfer_context_impl_output_parsing.cpp:796> [UID = 3]: Failed to parse classification attributes using custom parse function
Segmentation fault (core dumped)
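
I suspect the "open dictionary file failed." lines right before the segmentation fault mean the LPR parser can't find its character dictionary (dict.txt) in the working directory. If so, copying the repo's dict_us.txt there should help (paths below are my guess for this clone):

cd ~/opt/nvidia/deepstream/deepstream-5.1/samples/models/test/deepstream_lpr_app/deepstream-lpr-app
cp dict_us.txt dict.txt   # adjust the source path if dict_us.txt lives elsewhere in the clone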

Just like in the previous log/error, I still see this: ERROR: [TRT]: INVALID_ARGUMENT: Cannot find binding of given name: output_bbox/BiasAdd
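
If I'm reading the engine info above correctly, the LPD and TrafficCamNet engines do expose output_bbox/BiasAdd and output_cov/Sigmoid (the [Implicit Engine Info] blocks), while the LPR engine only exposes tf_op_layer_ArgMax and tf_op_layer_Max. So the warning seems to come from the LPR stage (UID 3) being asked for detector layer names, presumably via output-blob-names in lpr_config_sgie_us.txt, rather than from the engine itself. A quick way to check (run from the deepstream-lpr-app directory):

grep -n output-blob-names lpr_config_sgie_us.txt

If that line lists output_bbox/BiasAdd;output_cov/Sigmoid, pointing it at the LPR engine's own outputs (tf_op_layer_ArgMax;tf_op_layer_Max, per the engine info above) or dropping it would presumably clear the warning.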