Creating a Real-Time License Plate Detection and Recognition App

I am trying to “Convert the encrypted LPR ONNX model to a TLT engine”; however, when running the command:

./tlt-converter -k nvidia_tlt -p image_input,1x3x48x96,4x3x48x96,16x3x48x96 ./us_lprnet_baseline18_deployable.etltunpruned.etlt -t fp16 -e /opt/nvidia/deepstream/deepstream-5.1/samples/models/LP/LPR/lpr_us_onnx_b16.engine

I am receiving the error:
Error: no input dimensions given

Do you happen to know the dimensions we should be using for this?
cc/ @Morganh (the GitHub repo seems to have the same issue as noted here)

cc/ @TomK

@bbb
Can you run the following command?
$ ./tlt-converter -h

Did you download the latest version of tlt-converter?
See Overview — Transfer Learning Toolkit 3.0 documentation
or Overview — Transfer Learning Toolkit 3.0 documentation
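If an older converter build is installed, it will still insist on -d even for dynamic-shape models. The usage output lists a -v flag; assuming it prints build/version info (I have not verified the exact output format), you could check which build you have with:

```shell
# Assumption: -v reports the converter build/version, as suggested by the
# [-h] [-v] usage string printed by ./tlt-converter -h
./tlt-converter -v
```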

Yes, I can run it and it runs as it should. Output:

bryan@bryan-desktop:~/cuda10.2_trt7.1_jp4.5$ ./tlt-converter -h
usage: ./tlt-converter [-h] [-v] [-e ENGINE_FILE_PATH]
	[-k ENCODE_KEY] [-c CACHE_FILE]
	[-o OUTPUTS] [-d INPUT_DIMENSIONS]
	[-b BATCH_SIZE] [-m MAX_BATCH_SIZE]
	[-w MAX_WORKSPACE_SIZE] [-t DATA_TYPE]
	[-i INPUT_ORDER] [-s] [-u DLA_CORE]
	input_file

Generate TensorRT engine from exported model

positional arguments:
  input_file		Input file (.etlt exported model).

required flag arguments:
  -d		comma separated list of input dimensions(not required for TLT 3.0 new models).
  -k		model encoding key.

optional flag arguments:
  -b		calibration batch size (default 8).
  -c		calibration cache file (default cal.bin).
  -e		file the engine is saved to (default saved.engine).
  -i		input dimension ordering -- nchw, nhwc, nc (default nchw).
  -m		maximum TensorRT engine batch size (default 16). If meet with out-of-memory issue, please decrease the batch size accordingly.
  -o		comma separated list of output node names (default none).
  -p		comma separated list of optimization profile shapes in the format <input_name>,<min_shape>,<opt_shape>,<max_shape>, where each shape has the format: <n>x<c>x<h>x<w>. Can be specified multiple times if there are multiple input tensors for the model. This argument is only useful in dynamic shape case.
  -s		TensorRT strict_type_constraints flag for INT8 mode(default false).
  -t		TensorRT data type -- fp32, fp16, int8 (default fp32).
  -u		Use DLA core N for layers that support DLA(default = -1, which means no DLA core will be utilized for inference. Note that it'll always allow GPU fallback).
  -w		maximum workspace size of TensorRT engine (default 1<<30). If meet with out-of-memory issue, please increase the workspace size accordingly.

I downloaded the latest version of the tlt-converter as well.

To note, I am starting the tutorial from the bottom section, “Deploying LPD and LPR using the DeepStream SDK,” which should be OK to do.

Submitting another comment, in addition to the one above, @Morganh.

I decided to try another route, in cloning this repo here: GitHub - NVIDIA-AI-IOT/deepstream_lpr_app: Sample app code for LPR deployment on DeepStream

I am able to get the input dimensions to load correctly. However, another issue appears to be springing up. Multiple commands/outputs below:

Successful/no issue

bryan@bryan-desktop:~/opt/nvidia/deepstream/deepstream-5.1/samples/models/deepstream_lpr_app$ ../../../../../../../cuda10.2_trt7.1_jp4.5/tlt-converter -k nvidia_tlt -p image_input,1x3x48x96,4x3x48x96,16x3x48x96 \
>            models/LP/LPR/us_lprnet_baseline18_deployable.etlt -t fp16 -e models/LP/LPR/lpr_us_onnx_b16.engine
[WARNING] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[WARNING] Tensor DataType is determined at build time for tensors not marked as input or output.
[INFO] Detected input dimensions from the model: (-1, 3, 48, 96)
[INFO] Model has dynamic shape. Setting up optimization profiles.
[INFO] Using optimization profile min shape: (1, 3, 48, 96) for input: image_input
[INFO] Using optimization profile opt shape: (4, 3, 48, 96) for input: image_input
[INFO] Using optimization profile max shape: (16, 3, 48, 96) for input: image_input
[INFO] Detected 1 inputs and 2 output network tensors.

Successful/no issue

bryan@bryan-desktop:~/opt/nvidia/deepstream/deepstream-5.1/samples/models/deepstream_lpr_app$ make
make[1]: Entering directory '/home/bryan/opt/nvidia/deepstream/deepstream-5.1/samples/models/deepstream_lpr_app/nvinfer_custom_lpr_parser'
g++ -o libnvdsinfer_custom_impl_lpr.so nvinfer_custom_lpr_parser.cpp -Wall -Werror -std=c++11 -shared -fPIC -Wno-error=deprecated-declarations -I/opt/nvidia/deepstream/deepstream/sources/includes -Wl,--start-group -lnvinfer -lnvparsers -Wl,--end-group
make[1]: Leaving directory '/home/bryan/opt/nvidia/deepstream/deepstream-5.1/samples/models/deepstream_lpr_app/nvinfer_custom_lpr_parser'
make[1]: Entering directory '/home/bryan/opt/nvidia/deepstream/deepstream-5.1/samples/models/deepstream_lpr_app/deepstream-lpr-app'
g++ -c -o deepstream_lpr_app.o -fpermissive -Wall -Werror -DPLATFORM_TEGRA -I/opt/nvidia/deepstream/deepstream/sources/includes `pkg-config --cflags gstreamer-1.0` -D_GLIBCXX_USE_CXX11_ABI=1 deepstream_lpr_app.c
g++ -c -o deepstream_nvdsanalytics_meta.o -Wall -Werror -DPLATFORM_TEGRA -I/opt/nvidia/deepstream/deepstream/sources/includes `pkg-config --cflags gstreamer-1.0` -D_GLIBCXX_USE_CXX11_ABI=1 deepstream_nvdsanalytics_meta.cpp
cc -o deepstream-lpr-app deepstream_lpr_app.o deepstream_nvdsanalytics_meta.o `pkg-config --libs gstreamer-1.0` -L/opt/nvidia/deepstream/deepstream/lib/ -lnvdsgst_meta -lnvds_meta -lm -lstdc++ -Wl,-rpath,/opt/nvidia/deepstream/deepstream/lib/
make[1]: Leaving directory '/home/bryan/opt/nvidia/deepstream/deepstream-5.1/samples/models/deepstream_lpr_app/deepstream-lpr-app'

Unsuccessful/issue

bryan@bryan-desktop:~/opt/nvidia/deepstream/deepstream-5.1/samples/models/deepstream_lpr_app/deepstream-lpr-app$ ./deepstream-lpr-app 1 2 0 us_car_test2.mp4 us_car_test2.mp4 output.264
Request sink_0 pad from streammux
Request sink_1 pad from streammux
Warning: 'input-dims' parameter has been deprecated. Use 'infer-dims' instead.
Warning: 'input-dims' parameter has been deprecated. Use 'infer-dims' instead.
Unknown or legacy key specified 'process_mode' for group [property]
Now playing: 1
Opening in BLOCKING MODE
Opening in BLOCKING MODE 
Opening in BLOCKING MODE
Opening in BLOCKING MODE 
0:00:12.783947731  8708   0x558f5fb230 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary-infer-engine2> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1702> [UID = 3]: deserialized trt engine from :/home/bryan/opt/nvidia/deepstream/deepstream-5.1/samples/models/deepstream_lpr_app/models/LP/LPR/lpr_us_onnx_b16.engine
INFO: [FullDims Engine Info]: layers num: 3
0   INPUT  kFLOAT image_input     3x48x96         min: 1x3x48x96       opt: 4x3x48x96       Max: 16x3x48x96      
1   OUTPUT kINT32 tf_op_layer_ArgMax 24              min: 0               opt: 0               Max: 0               
2   OUTPUT kFLOAT tf_op_layer_Max 24              min: 0               opt: 0               Max: 0               

ERROR: [TRT]: INVALID_ARGUMENT: Cannot find binding of given name: output_bbox/BiasAdd
0:00:12.784171641  8708   0x558f5fb230 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary-infer-engine2> NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1670> [UID = 3]: Could not find output layer 'output_bbox/BiasAdd' in engine
ERROR: [TRT]: INVALID_ARGUMENT: Cannot find binding of given name: output_cov/Sigmoid
0:00:12.784218673  8708   0x558f5fb230 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary-infer-engine2> NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1670> [UID = 3]: Could not find output layer 'output_cov/Sigmoid' in engine
0:00:12.784243465  8708   0x558f5fb230 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary-infer-engine2> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1806> [UID = 3]: Use deserialized engine model: /home/bryan/opt/nvidia/deepstream/deepstream-5.1/samples/models/deepstream_lpr_app/models/LP/LPR/lpr_us_onnx_b16.engine
0:00:12.825354703  8708   0x558f5fb230 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary-infer-engine2> [UID 3]: Load new model:lpr_config_sgie_us.txt sucessfully
0:00:12.825646583  8708   0x558f5fb230 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary-infer-engine1> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 2]: Trying to create engine from model files
WARNING: INT8 not supported by platform. Trying FP16 mode.
WARNING: INT8 not supported by platform. Trying FP16 mode.
INFO: [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: [TRT]: Detected 1 inputs and 2 output network tensors.
0:01:46.893865326  8708   0x558f5fb230 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary-infer-engine1> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1749> [UID = 2]: serialize cuda engine to file: /home/bryan/opt/nvidia/deepstream/deepstream-5.1/samples/models/deepstream_lpr_app/models/LP/LPD/usa_pruned.etlt_b16_gpu0_fp16.engine successfully
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x480x640       
1   OUTPUT kFLOAT output_bbox/BiasAdd 4x30x40         
2   OUTPUT kFLOAT output_cov/Sigmoid 1x30x40         

0:01:47.053010315  8708   0x558f5fb230 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary-infer-engine1> [UID 2]: Load new model:lpd_us_config.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvdcf.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
!! [WARNING][NvDCF] Unknown param found: minMatchingScore4Motion
[NvDCF][Warning] `minTrackingConfidenceDuringInactive` is deprecated
!! [WARNING][NvDCF] Unknown param found: matchingScoreWeight4Motion
[NvDCF] Initialized
ERROR: Deserialize engine failed because file path: /home/bryan/opt/nvidia/deepstream/deepstream-5.1/samples/models/deepstream_lpr_app/deepstream-lpr-app/../models/tlt_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt_b1_gpu0_int8.engine open error
0:01:49.219258844  8708   0x558f5fb230 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary-infer-engine1> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1691> [UID = 1]: deserialize engine from file :/home/bryan/opt/nvidia/deepstream/deepstream-5.1/samples/models/deepstream_lpr_app/deepstream-lpr-app/../models/tlt_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt_b1_gpu0_int8.engine failed
0:01:49.219294521  8708   0x558f5fb230 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary-infer-engine1> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1798> [UID = 1]: deserialize backend context from engine from file :/home/bryan/opt/nvidia/deepstream/deepstream-5.1/samples/models/deepstream_lpr_app/deepstream-lpr-app/../models/tlt_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt_b1_gpu0_int8.engine failed, try rebuild
0:01:49.219326814  8708   0x558f5fb230 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-infer-engine1> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 1]: Trying to create engine from model files
WARNING: INT8 not supported by platform. Trying FP16 mode.
WARNING: INT8 not supported by platform. Trying FP16 mode.
INFO: [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: [TRT]: Detected 1 inputs and 2 output network tensors.
0:02:36.543306401  8708   0x558f5fb230 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-infer-engine1> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1749> [UID = 1]: serialize cuda engine to file: /home/bryan/opt/nvidia/deepstream/deepstream-5.1/samples/models/deepstream_lpr_app/models/tlt_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt_b1_gpu0_fp16.engine successfully
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x544x960       
1   OUTPUT kFLOAT output_bbox/BiasAdd 16x34x60        
2   OUTPUT kFLOAT output_cov/Sigmoid 4x34x60         

0:02:36.577236606  8708   0x558f5fb230 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-infer-engine1> [UID 1]: Load new model:trafficamnet_config.txt sucessfully
[NvDCF] De-initialized
Running...
ERROR from element file_src_1: Resource not found.
Error details: gstfilesrc.c(533): gst_file_src_start (): /GstPipeline:pipeline/GstFileSrc:file_src_1:
No such file "us_car_test2.mp4"
Returned, stopping playback
Average fps 0.000233
Totally 0 plates are inferred
Deleting pipeline

You have generated lpr_us_onnx_b16.engine successfully. There is no error in the log.

@Morganh I hit submit too quickly and needed to edit my comment above. Could you review/help me with the unsuccessful run I am seeing above? It’s in building/running the application per the GitHub commands here:

I moved this topic from the Content Discussions – Developer Blog category to the TLT forum.


Please check if us_car_test2.mp4 is available.

I should have posted the log below earlier, as I noticed that “us_car_test2.mp4” might not be available. The video I am trying to test with is within deepstream-5.1/samples/streams (sample_qHD.mp4). The logs appear similar:

bryan@bryan-desktop:~/opt/nvidia/deepstream/deepstream-5.1/samples/models/deepstream_lpr_app/deepstream-lpr-app$ ./deepstream-lpr-app 1 2 0 ../../../streams/sample_qHD.mp4 ../../../streams/sample_qHd.mp4 output.264
Request sink_0 pad from streammux
Request sink_1 pad from streammux
Warning: 'input-dims' parameter has been deprecated. Use 'infer-dims' instead.
Warning: 'input-dims' parameter has been deprecated. Use 'infer-dims' instead.
Unknown or legacy key specified 'process_mode' for group [property]
Now playing: 1
Opening in BLOCKING MODE
Opening in BLOCKING MODE 
Opening in BLOCKING MODE
Opening in BLOCKING MODE 
0:00:11.534274321 18445   0x5592fe2830 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary-infer-engine2> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1702> [UID = 3]: deserialized trt engine from :/home/bryan/opt/nvidia/deepstream/deepstream-5.1/samples/models/deepstream_lpr_app/models/LP/LPR/lpr_us_onnx_b16.engine
INFO: [FullDims Engine Info]: layers num: 3
0   INPUT  kFLOAT image_input     3x48x96         min: 1x3x48x96       opt: 4x3x48x96       Max: 16x3x48x96      
1   OUTPUT kINT32 tf_op_layer_ArgMax 24              min: 0               opt: 0               Max: 0               
2   OUTPUT kFLOAT tf_op_layer_Max 24              min: 0               opt: 0               Max: 0               

ERROR: [TRT]: INVALID_ARGUMENT: Cannot find binding of given name: output_bbox/BiasAdd
0:00:11.534479118 18445   0x5592fe2830 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary-infer-engine2> NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1670> [UID = 3]: Could not find output layer 'output_bbox/BiasAdd' in engine
ERROR: [TRT]: INVALID_ARGUMENT: Cannot find binding of given name: output_cov/Sigmoid
0:00:11.534532088 18445   0x5592fe2830 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary-infer-engine2> NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1670> [UID = 3]: Could not find output layer 'output_cov/Sigmoid' in engine
0:00:11.534562922 18445   0x5592fe2830 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary-infer-engine2> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1806> [UID = 3]: Use deserialized engine model: /home/bryan/opt/nvidia/deepstream/deepstream-5.1/samples/models/deepstream_lpr_app/models/LP/LPR/lpr_us_onnx_b16.engine
0:00:11.576047481 18445   0x5592fe2830 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary-infer-engine2> [UID 3]: Load new model:lpr_config_sgie_us.txt sucessfully
0:00:11.576306498 18445   0x5592fe2830 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary-infer-engine1> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 2]: Trying to create engine from model files
WARNING: INT8 not supported by platform. Trying FP16 mode.
WARNING: INT8 not supported by platform. Trying FP16 mode.
INFO: [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: [TRT]: Detected 1 inputs and 2 output network tensors.
0:01:46.495432090 18445   0x5592fe2830 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary-infer-engine1> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1749> [UID = 2]: serialize cuda engine to file: /home/bryan/opt/nvidia/deepstream/deepstream-5.1/samples/models/deepstream_lpr_app/models/LP/LPD/usa_pruned.etlt_b16_gpu0_fp16.engine successfully
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x480x640       
1   OUTPUT kFLOAT output_bbox/BiasAdd 4x30x40         
2   OUTPUT kFLOAT output_cov/Sigmoid 1x30x40         

0:01:46.743905225 18445   0x5592fe2830 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary-infer-engine1> [UID 2]: Load new model:lpd_us_config.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvdcf.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
!! [WARNING][NvDCF] Unknown param found: minMatchingScore4Motion
[NvDCF][Warning] `minTrackingConfidenceDuringInactive` is deprecated
!! [WARNING][NvDCF] Unknown param found: matchingScoreWeight4Motion
[NvDCF] Initialized
ERROR: Deserialize engine failed because file path: /home/bryan/opt/nvidia/deepstream/deepstream-5.1/samples/models/deepstream_lpr_app/deepstream-lpr-app/../models/tlt_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt_b1_gpu0_int8.engine open error
0:01:48.909930484 18445   0x5592fe2830 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary-infer-engine1> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1691> [UID = 1]: deserialize engine from file :/home/bryan/opt/nvidia/deepstream/deepstream-5.1/samples/models/deepstream_lpr_app/deepstream-lpr-app/../models/tlt_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt_b1_gpu0_int8.engine failed
0:01:48.909990381 18445   0x5592fe2830 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary-infer-engine1> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1798> [UID = 1]: deserialize backend context from engine from file :/home/bryan/opt/nvidia/deepstream/deepstream-5.1/samples/models/deepstream_lpr_app/deepstream-lpr-app/../models/tlt_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt_b1_gpu0_int8.engine failed, try rebuild
0:01:48.910035643 18445   0x5592fe2830 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-infer-engine1> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 1]: Trying to create engine from model files
WARNING: INT8 not supported by platform. Trying FP16 mode.
WARNING: INT8 not supported by platform. Trying FP16 mode.
INFO: [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: [TRT]: Detected 1 inputs and 2 output network tensors.
0:02:38.473025813 18445   0x5592fe2830 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-infer-engine1> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1749> [UID = 1]: serialize cuda engine to file: /home/bryan/opt/nvidia/deepstream/deepstream-5.1/samples/models/deepstream_lpr_app/models/tlt_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt_b1_gpu0_fp16.engine successfully
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x544x960       
1   OUTPUT kFLOAT output_bbox/BiasAdd 16x34x60        
2   OUTPUT kFLOAT output_cov/Sigmoid 4x34x60         

0:02:38.647800783 18445   0x5592fe2830 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-infer-engine1> [UID 1]: Load new model:trafficamnet_config.txt sucessfully
[NvDCF] De-initialized
Running...
ERROR from element file_src_1: Resource not found.
Error details: gstfilesrc.c(533): gst_file_src_start (): /GstPipeline:pipeline/GstFileSrc:file_src_1:
No such file "../../../streams/sample_qHd.mp4"
Returned, stopping playback
Average fps 0.000233
Totally 0 plates are inferred
Deleting pipeline

Please check that the mp4 file is available.

I can see this file in my terminal (it came with DeepStream). I used tab autocomplete to get the file.

@Morganh Something kinda interesting: I created another clone to re-run the tlt-converter and add outputs.

Previous Command

./tlt-converter -k nvidia_tlt -p image_input,1x3x48x96,4x3x48x96,16x3x48x96 models/LP/LPR/us_lprnet_baseline18_deployable.etlt -t fp16 -e models/LP/LPR/lpr_us_onnx_b16.engine

New Command: adding -o output_cov/Sigmoid,output_bbox/BiasAdd

./tlt-converter -k nvidia_tlt -o output_cov/Sigmoid,output_bbox/BiasAdd -p image_input,1x3x48x96,4x3x48x96,16x3x48x96 models/LP/LPR/us_lprnet_baseline18_deployable.etlt -t fp16 -e models/LP/LPR/lpr_us_onnx_b16.engine

New run: still unsuccessful, but maybe the right direction?

bryan@bryan-desktop:~/opt/nvidia/deepstream/deepstream-5.1/samples/models/test/deepstream_lpr_app/deepstream-lpr-app$ ./deepstream-lpr-app 1 2 0 ../../../../streams/sample_720p.mp4 ../../../../streams/sample_720p.mp4 output.264
Request sink_0 pad from streammux
Request sink_1 pad from streammux
Warning: 'input-dims' parameter has been deprecated. Use 'infer-dims' instead.
Warning: 'input-dims' parameter has been deprecated. Use 'infer-dims' instead.
Unknown or legacy key specified 'process_mode' for group [property]
Now playing: 1
Opening in BLOCKING MODE
Opening in BLOCKING MODE 
Opening in BLOCKING MODE
Opening in BLOCKING MODE 
0:00:10.921601849 19412   0x55639b70d0 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary-infer-engine2> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1702> [UID = 3]: deserialized trt engine from :/home/bryan/opt/nvidia/deepstream/deepstream-5.1/samples/models/test/deepstream_lpr_app/models/LP/LPR/lpr_us_onnx_b16.engine
INFO: [FullDims Engine Info]: layers num: 3
0   INPUT  kFLOAT image_input     3x48x96         min: 1x3x48x96       opt: 4x3x48x96       Max: 16x3x48x96      
1   OUTPUT kINT32 tf_op_layer_ArgMax 24              min: 0               opt: 0               Max: 0               
2   OUTPUT kFLOAT tf_op_layer_Max 24              min: 0               opt: 0               Max: 0               

ERROR: [TRT]: INVALID_ARGUMENT: Cannot find binding of given name: output_bbox/BiasAdd
0:00:10.921851646 19412   0x55639b70d0 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary-infer-engine2> NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1670> [UID = 3]: Could not find output layer 'output_bbox/BiasAdd' in engine
ERROR: [TRT]: INVALID_ARGUMENT: Cannot find binding of given name: output_cov/Sigmoid
0:00:10.921900970 19412   0x55639b70d0 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary-infer-engine2> NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1670> [UID = 3]: Could not find output layer 'output_cov/Sigmoid' in engine
0:00:10.921961909 19412   0x55639b70d0 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary-infer-engine2> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1806> [UID = 3]: Use deserialized engine model: /home/bryan/opt/nvidia/deepstream/deepstream-5.1/samples/models/test/deepstream_lpr_app/models/LP/LPR/lpr_us_onnx_b16.engine
0:00:10.962857824 19412   0x55639b70d0 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary-infer-engine2> [UID 3]: Load new model:lpr_config_sgie_us.txt sucessfully
0:00:10.963561485 19412   0x55639b70d0 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary-infer-engine1> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 2]: Trying to create engine from model files
WARNING: INT8 not supported by platform. Trying FP16 mode.
WARNING: INT8 not supported by platform. Trying FP16 mode.
INFO: [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: [TRT]: Detected 1 inputs and 2 output network tensors.
0:01:43.118323033 19412   0x55639b70d0 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary-infer-engine1> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1749> [UID = 2]: serialize cuda engine to file: /home/bryan/opt/nvidia/deepstream/deepstream-5.1/samples/models/test/deepstream_lpr_app/models/LP/LPD/usa_pruned.etlt_b16_gpu0_fp16.engine successfully
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x480x640       
1   OUTPUT kFLOAT output_bbox/BiasAdd 4x30x40         
2   OUTPUT kFLOAT output_cov/Sigmoid 1x30x40         

0:01:43.281237925 19412   0x55639b70d0 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary-infer-engine1> [UID 2]: Load new model:lpd_us_config.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvdcf.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
!! [WARNING][NvDCF] Unknown param found: minMatchingScore4Motion
[NvDCF][Warning] `minTrackingConfidenceDuringInactive` is deprecated
!! [WARNING][NvDCF] Unknown param found: matchingScoreWeight4Motion
[NvDCF] Initialized
ERROR: Deserialize engine failed because file path: /home/bryan/opt/nvidia/deepstream/deepstream-5.1/samples/models/test/deepstream_lpr_app/deepstream-lpr-app/../models/tlt_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt_b1_gpu0_int8.engine open error
0:01:45.437559981 19412   0x55639b70d0 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary-infer-engine1> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1691> [UID = 1]: deserialize engine from file :/home/bryan/opt/nvidia/deepstream/deepstream-5.1/samples/models/test/deepstream_lpr_app/deepstream-lpr-app/../models/tlt_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt_b1_gpu0_int8.engine failed
0:01:45.437606544 19412   0x55639b70d0 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary-infer-engine1> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1798> [UID = 1]: deserialize backend context from engine from file :/home/bryan/opt/nvidia/deepstream/deepstream-5.1/samples/models/test/deepstream_lpr_app/deepstream-lpr-app/../models/tlt_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt_b1_gpu0_int8.engine failed, try rebuild
0:01:45.437640139 19412   0x55639b70d0 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-infer-engine1> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 1]: Trying to create engine from model files
WARNING: INT8 not supported by platform. Trying FP16 mode.
WARNING: INT8 not supported by platform. Trying FP16 mode.
INFO: [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: [TRT]: Detected 1 inputs and 2 output network tensors.
0:02:33.741014630 19412   0x55639b70d0 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-infer-engine1> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1749> [UID = 1]: serialize cuda engine to file: /home/bryan/opt/nvidia/deepstream/deepstream-5.1/samples/models/test/deepstream_lpr_app/models/tlt_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt_b1_gpu0_fp16.engine successfully
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x544x960       
1   OUTPUT kFLOAT output_bbox/BiasAdd 16x34x60        
2   OUTPUT kFLOAT output_cov/Sigmoid 4x34x60         

0:02:34.035270946 19412   0x55639b70d0 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-infer-engine1> [UID 1]: Load new model:trafficamnet_config.txt sucessfully
Running...
qtdemux pad video/x-h264
qtdemux pad video/x-h264
h264parser already linked. Ignoring.
h264parser already linked. Ignoring.
NvMMLiteOpen : Block : BlockType = 261 
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
NvMMLiteBlockCreate : Block : BlockType = 261 
Frame Number = 0 Vehicle Count = 0 Person Count = 0 License Plate Count = 0
Frame Number = 1 Vehicle Count = 0 Person Count = 0 License Plate Count = 0
Frame Number = 2 Vehicle Count = 0 Person Count = 0 License Plate Count = 0
Frame Number = 3 Vehicle Count = 6 Person Count = 4 License Plate Count = 0
Frame Number = 4 Vehicle Count = 6 Person Count = 4 License Plate Count = 0
Frame Number = 5 Vehicle Count = 6 Person Count = 4 License Plate Count = 0
Frame Number = 6 Vehicle Count = 6 Person Count = 4 License Plate Count = 0
Frame Number = 7 Vehicle Count = 8 Person Count = 4 License Plate Count = 0
Frame Number = 8 Vehicle Count = 8 Person Count = 4 License Plate Count = 0
Frame Number = 9 Vehicle Count = 10 Person Count = 4 License Plate Count = 0
Frame Number = 10 Vehicle Count = 10 Person Count = 4 License Plate Count = 0
Frame Number = 11 Vehicle Count = 8 Person Count = 4 License Plate Count = 0
Frame Number = 12 Vehicle Count = 8 Person Count = 4 License Plate Count = 0
Frame Number = 13 Vehicle Count = 8 Person Count = 4 License Plate Count = 0
Frame Number = 14 Vehicle Count = 8 Person Count = 4 License Plate Count = 0
Frame Number = 15 Vehicle Count = 8 Person Count = 4 License Plate Count = 0
Frame Number = 16 Vehicle Count = 8 Person Count = 4 License Plate Count = 0
Frame Number = 17 Vehicle Count = 10 Person Count = 4 License Plate Count = 0
open dictionary file failed.
0:02:40.895026375 19412   0x5563261d40 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<secondary-infer-engine2> NvDsInferContext[UID 3]: Error in NvDsInferContextImpl::fillClassificationOutput() <nvdsinfer_context_impl_output_parsing.cpp:796> [UID = 3]: Failed to parse classification attributes using custom parse function
open dictionary file failed.
0:02:40.895353049 19412   0x5563261d40 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<secondary-infer-engine2> NvDsInferContext[UID 3]: Error in NvDsInferContextImpl::fillClassificationOutput() <nvdsinfer_context_impl_output_parsing.cpp:796> [UID = 3]: Failed to parse classification attributes using custom parse function
Segmentation fault (core dumped)

As in the previous log/error, I still see this: ERROR: [TRT]: INVALID_ARGUMENT: Cannot find binding of given name: output_bbox/BiasAdd
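One way I could double-check which bindings the LPR engine actually exposes (a sketch, assuming TensorRT's trtexec tool is present at its usual Jetson location):

```shell
# Assumption: trtexec ships with TensorRT at this path on Jetson.
# --loadEngine deserializes the engine; --verbose lists its bindings,
# which should show tf_op_layer_ArgMax / tf_op_layer_Max rather than
# output_bbox/BiasAdd or output_cov/Sigmoid.
/usr/src/tensorrt/bin/trtexec \
  --loadEngine=models/LP/LPR/lpr_us_onnx_b16.engine --verbose
```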

Please ignore "Cannot find binding of given name: output_bbox/BiasAdd".

For the latest error, “open dictionary file failed”, please do the following.

For US car plate recognition

    cp dict_us.txt dict.txt

For Chinese car plate recognition

    cp dict_ch.txt dict.txt
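The parser appears to open dict.txt relative to the working directory, so run the copy in the directory you launch the app from (a sketch; the exact path depends on where you cloned the repo):

```shell
# Assumption: dict.txt is looked up relative to the current working
# directory at launch, so copy it where you run ./deepstream-lpr-app
cd deepstream-lpr-app
cp dict_us.txt dict.txt   # or: cp dict_ch.txt dict.txt for Chinese plates
```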

It seems as though it is running correctly now (with my revised tlt-converter command), which is still a bit odd, but it’s late here and I might be getting a bit fatigued.

Last question (I hope), which might be a bit dumb: the output of the results is only in the terminal; is it possible to view the mp4 video as well?

See

    ./deepstream-lpr-app <1:US car plate model|2: Chinese car plate model> \
         <1: output as h264 file| 2:fakesink 3:display output> <0:ROI disable|1:ROI enable> \
         <input mp4 file name> ... <input mp4 file name> <output file name>

Please run with ./deepstream-lpr-app 1 1 0 …
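Putting the usage string together, hypothetical invocations for the common sinks might look like this (input.mp4 is a placeholder):

```shell
# US plate model, write annotated h264 to output.264, ROI disabled
./deepstream-lpr-app 1 1 0 input.mp4 output.264

# US plate model, render to the display instead (per the usage string an
# output file name may still be expected as the last argument)
./deepstream-lpr-app 1 3 0 input.mp4 output.264
```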

@Morganh Thanks for all the help. Got it working with 1 3 0

I have successfully tested the example described in the GitHub repo.

As I am still a beginner, I would like to be able to test this example with an RTSP video stream. How can this be done? Is there a guide for testing this?

This sample only supports MP4 files with h264 video. deepstream_lpr_app/README.txt at master · NVIDIA-AI-IOT/deepstream_lpr_app · GitHub

You can refer to the deepstream-app sample code for the RTSP source part.
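Before modifying the app, it may also help to confirm the RTSP stream itself decodes on the device. A minimal gst-launch sketch (the URL is a placeholder, and the element names assume an h264 stream on a Jetson with DeepStream 5.1 installed):

```shell
# Placeholder URL; rtph264depay/nvv4l2decoder assume an h264 RTSP stream,
# and nvegltransform/nveglglessink assume a Jetson display sink
gst-launch-1.0 rtspsrc location="rtsp://<camera-ip>:554/stream" ! \
  rtph264depay ! h264parse ! nvv4l2decoder ! nvegltransform ! nveglglessink
```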

I realized that the LPR can only recognize rectangular (single-row) license plates. What about square license plates with 2 rows? Can we re-train the model with a new dataset that contains two-row character images?

Currently, LPRNet does not support two-line license plates.
