Please provide complete information as applicable to your setup.
**• Hardware Platform (Jetson / GPU)** Jetson NX
**• DeepStream Version** 5.0.1
**• JetPack Version (valid for Jetson only)** 4.4
**• TensorRT Version** 7.1.3.1
Hi, I ran into the error below, and I have confirmed that the label file path is correct.
How can I solve this error?
Thanks!
Opening in BLOCKING MODE
Using winsys: x11
0:00:04.895485739 4539 0x31cbb860 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1701> [UID = 2]: deserialized trt engine from :/home/nx/code/yolov3_tlt/./models/LPR/lpr_ch_onnx_b16.engine
INFO: [FullDims Engine Info]: layers num: 3
0 INPUT kFLOAT image_input 3x48x96 min: 1x3x48x96 opt: 4x3x48x96 Max: 16x3x48x96
1 OUTPUT kINT32 tf_op_layer_ArgMax 24 min: 0 opt: 0 Max: 0
2 OUTPUT kFLOAT tf_op_layer_Max 24 min: 0 opt: 0 Max: 0
ERROR: [TRT]: INVALID_ARGUMENT: Cannot find binding of given name: output_bbox/BiasAdd
0:00:04.895800298 4539 0x31cbb860 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1669> [UID = 2]: Could not find output layer 'output_bbox/BiasAdd' in engine
ERROR: [TRT]: INVALID_ARGUMENT: Cannot find binding of given name: output_cov/Sigmoid
0:00:04.895898249 4539 0x31cbb860 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1669> [UID = 2]: Could not find output layer 'output_cov/Sigmoid' in engine
0:00:04.895936041 4539 0x31cbb860 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1805> [UID = 2]: Use deserialized engine model: /home/nx/code/yolov3_tlt/./models/LPR/lpr_ch_onnx_b16.engine
0:00:04.897895968 4539 0x31cbb860 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::parseLabelsFile() <nvdsinfer_context_impl.cpp:457> [UID = 2]: Could not open labels file:/home/nx/code/yolov3_tlt/labels_ch.txt
ERROR: parse label file:/home/nx/code/yolov3_tlt/labels_ch.txt failed, nvinfer error:NVDSINFER_CONFIG_FAILED
ERROR: init post processing resource failed, nvinfer error:NVDSINFER_CONFIG_FAILED
ERROR: Infer Context failed to initialize post-processing resource, nvinfer error:NVDSINFER_CONFIG_FAILED
ERROR: Infer Context prepare postprocessing resource failed., nvinfer error:NVDSINFER_CONFIG_FAILED
0:00:04.920359035 4539 0x31cbb860 WARN nvinfer gstnvinfer.cpp:809:gst_nvinfer_start:<secondary_gie_0> error: Failed to create NvDsInferContext instance
0:00:04.920540475 4539 0x31cbb860 WARN nvinfer gstnvinfer.cpp:809:gst_nvinfer_start:<secondary_gie_0> error: Config file path: /home/nx/code/yolov3_tlt/spie_lpr_tlt_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
** ERROR: <main:655>: Failed to set pipeline to PAUSED
Quitting
ERROR from secondary_gie_0: Failed to create NvDsInferContext instance
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(809): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:secondary_gie_bin/GstNvInfer:secondary_gie_0:
Config file path: /home/nx/code/yolov3_tlt/spie_lpr_tlt_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
App run failed
Can you post spie_lpr_tlt_config.txt and labels_ch.txt?
spie_lpr_tlt_config.txt
[property]
gpu-id=0
net-scale-factor=0.00392156862745098
# tlt-model-key=nvidia_tlt
# tlt-encoded-model=./models/LPR/ch_lprnet_baseline18_deployable.etlt
labelfile-path=labels_ch.txt
model-engine-file=./models/LPR/lpr_ch_onnx_b16.engine
batch-size=4
process-mode=2
model-color-format=0
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
#0=Detection 1=Classifier 2=Segmentation
network-type=1
num-detected-classes=3
# interval=0
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=0
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid
parse-classifier-func-name=NvDsInferParseCustomNVPlate
custom-lib-path=./lpr_parser/libnvdsinfer_custom_impl_lpr.so
classifier-threshold=0.2
labels_ch.txt
皖
沪
津
渝
冀
晋
蒙
辽
吉
黑
苏
浙
京
闽
赣
鲁
豫
鄂
湘
粤
桂
琼
川
贵
云
藏
陕
甘
青
宁
新
警
学
A
B
C
D
E
F
G
H
J
K
L
M
N
P
Q
R
S
T
U
V
W
X
Y
Z
0
1
2
3
4
5
6
7
8
9
Both files are in the same folder.
Yes, I referred to this example, but I want to split out the LPR part as a separate secondary GIE.
[secondary-gie]
enable=1
model-engine-file=./models/LPR/lpr_ch_onnx_b16.engine
gpu-id=0
batch-size=4
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=0;
config-file=spie_lpr_tlt_config.txt
And then I got the error above.
LPR is already an SGIE in the sample; what do you mean by "separate"?
This sample only needs a null (empty) label file. Please refer to download_ch.sh.
Your labels_ch.txt is actually the dictionary file. Please rename it to "dict.txt" and put it in the same folder as the app.
I want to use a YOLO-based method for the license plate detection part, and the LPR part from the example for the license plate recognition part.
The pipeline is: license plate detection → license plate recognition → send the result to the cloud.
There are currently two questions:
- How to use the LPR part of the example correctly?
- How to send the recognition result to the cloud server?
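For the second question, DeepStream's usual route is the nvmsgconv + nvmsgbroker plugins, which publish frame metadata to a broker such as Kafka or MQTT. As a minimal sketch outside of DeepStream (the endpoint URL and JSON field names below are hypothetical, not a DeepStream schema), the recognized plate string could also be posted over plain HTTP:

```python
import json
import urllib.request


def build_plate_payload(plate_text, camera_id, timestamp):
    """Pack one recognition result into a JSON message.

    The field names here are illustrative, not a DeepStream schema.
    """
    return json.dumps({
        "camera_id": camera_id,
        "timestamp": timestamp,
        "license_plate": plate_text,
    }).encode("utf-8")


def send_to_cloud(payload, url):
    """POST the JSON payload to a (hypothetical) cloud endpoint."""
    req = urllib.request.Request(
        url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status


# Example: payload for a plate string produced by the LPR SGIE
payload = build_plate_payload("皖A12345", camera_id=0, timestamp=1620000000)
```

In a real pipeline, a pad probe on the SGIE's src pad would extract the classifier label from the object metadata and call something like `send_to_cloud()` (or, preferably, hand the message to nvmsgbroker).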
These are my txt files,
dict.txt (200 Bytes) lp_deepstream.txt (4.0 KB) pgie_yolov3_tlt_config.txt (2.1 KB) spie_lpr_tlt_config.txt (2.9 KB)
In the lp_deepstream.txt file, if I set
[secondary-gie]
enable=0
it runs correctly and detects the license plate,
but if I set
[secondary-gie]
enable=1
I get the following error:
Opening in BLOCKING MODE
Using winsys: x11
0:00:04.895485739 4539 0x31cbb860 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1701> [UID = 2]: deserialized trt engine from :/home/nx/code/yolov3_tlt/./models/LPR/lpr_ch_onnx_b16.engine
INFO: [FullDims Engine Info]: layers num: 3
0 INPUT kFLOAT image_input 3x48x96 min: 1x3x48x96 opt: 4x3x48x96 Max: 16x3x48x96
1 OUTPUT kINT32 tf_op_layer_ArgMax 24 min: 0 opt: 0 Max: 0
2 OUTPUT kFLOAT tf_op_layer_Max 24 min: 0 opt: 0 Max: 0
ERROR: [TRT]: INVALID_ARGUMENT: Cannot find binding of given name: output_bbox/BiasAdd
0:00:04.895800298 4539 0x31cbb860 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1669> [UID = 2]: Could not find output layer 'output_bbox/BiasAdd' in engine
ERROR: [TRT]: INVALID_ARGUMENT: Cannot find binding of given name: output_cov/Sigmoid
0:00:04.895898249 4539 0x31cbb860 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1669> [UID = 2]: Could not find output layer 'output_cov/Sigmoid' in engine
0:00:04.895936041 4539 0x31cbb860 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1805> [UID = 2]: Use deserialized engine model: /home/nx/code/yolov3_tlt/./models/LPR/lpr_ch_onnx_b16.engine
0:00:04.897895968 4539 0x31cbb860 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::parseLabelsFile() <nvdsinfer_context_impl.cpp:457> [UID = 2]: Could not open labels file:/home/nx/code/yolov3_tlt/dict.txt
ERROR: parse label file:/home/nx/code/yolov3_tlt/dict.txt failed, nvinfer error:NVDSINFER_CONFIG_FAILED
ERROR: init post processing resource failed, nvinfer error:NVDSINFER_CONFIG_FAILED
ERROR: Infer Context failed to initialize post-processing resource, nvinfer error:NVDSINFER_CONFIG_FAILED
ERROR: Infer Context prepare postprocessing resource failed., nvinfer error:NVDSINFER_CONFIG_FAILED
0:00:04.920359035 4539 0x31cbb860 WARN nvinfer gstnvinfer.cpp:809:gst_nvinfer_start:<secondary_gie_0> error: Failed to create NvDsInferContext instance
0:00:04.920540475 4539 0x31cbb860 WARN nvinfer gstnvinfer.cpp:809:gst_nvinfer_start:<secondary_gie_0> error: Config file path: /home/nx/code/yolov3_tlt/spie_lpr_tlt_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
** ERROR: main:655: Failed to set pipeline to PAUSED
Quitting
ERROR from secondary_gie_0: Failed to create NvDsInferContext instance
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(809): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:secondary_gie_bin/GstNvInfer:secondary_gie_0:
Config file path: /home/nx/code/yolov3_tlt/spie_lpr_tlt_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
App run failed
How can I modify the related txt files?
And I hope you can also answer my second question.
Thank you very much!
There are two text files for the LPR config. One is a null (empty) file named "labels_ch.txt"; the other is the dict.txt file I mentioned in the post above.
You must set "labelfile-path=labels_ch.txt" in your spie_lpr_tlt_config.txt file, not dict.txt.
dict.txt is used by libnvdsinfer_custom_impl_lpr.so; you don't need to configure it. Just put dict.txt in the same folder as your app.
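For reference, the key [property] lines of the SGIE config might then look like this (a sketch based on the paths and engine bindings printed in the logs above; verify against your own setup):

```
[property]
# path to the empty label file the classifier expects
labelfile-path=/home/nx/code/yolov3_tlt/models/LPR/labels_ch.txt
model-engine-file=./models/LPR/lpr_ch_onnx_b16.engine
# match the output bindings the engine actually reports,
# not the detector blobs output_bbox/BiasAdd;output_cov/Sigmoid
output-blob-names=tf_op_layer_ArgMax;tf_op_layer_Max
```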
Thank you for your reply.
I did what you said,
and put labels_ch.txt in the same folder as the lpr_ch_onnx_b16.engine model, just like the example,
but I still get this error:
0:00:04.611812775 15575 0x37b73800 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::parseLabelsFile() <nvdsinfer_context_impl.cpp:457> [UID = 2]: Could not open labels file:/home/nx/code/yolov3_tlt/./models/LPR/labels_ch.txt
I used the lp_deepstream.txt (4.0 KB) instead of the app (deepstream-lpr-app/deepstream_lpr_app.c) from the example.
Can I do this? What else do I need to modify to avoid this error?
Can you show the label file information with the following command?
ls -l /home/nx/code/yolov3_tlt/./models/LPR/labels_ch.txt
Yes, I can run the original deepstream_lpr_app sample following its steps without any error.
Why not just replace the model in the original app? Alternatively, set the full label-file path in spie_lpr_tlt_config.txt: labelfile-path=/home/nx/code/yolov3_tlt/models/LPR/labels_ch.txt
Thank you very much for your reply!
1. The LPD model in the original app is trained on the CCPD dataset, but its detection performance is not very good. Since I also use the CCPD dataset, I want to try another algorithm, such as YOLOv3.
2. I set the correct label-file path as you said,
but I got a new error:
nx@nx-desktop:~/code/yolov3_tlt$ deepstream-app -c lp_deepstream.txt
Error: Could not parse labels file path
Failed to parse group property
** ERROR: <gst_nvinfer_parse_config_file:1242>: failed
Opening in BLOCKING MODE
0:00:00.299627280 18549 0x7f3c002230 WARN nvinfer gstnvinfer.cpp:766:gst_nvinfer_start:<secondary_gie_0> error: Configuration file parsing failed
0:00:00.299733136 18549 0x7f3c002230 WARN nvinfer gstnvinfer.cpp:766:gst_nvinfer_start:<secondary_gie_0> error: Config file path: /home/nx/code/yolov3_tlt/spie_lpr_tlt_config.txt
** ERROR: main:655: Failed to set pipeline to PAUSED
Quitting
ERROR from secondary_gie_0: Configuration file parsing failed
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(766): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:secondary_gie_bin/GstNvInfer:secondary_gie_0:
Config file path: /home/nx/code/yolov3_tlt/spie_lpr_tlt_config.txt
App run failed
Very kind of you! Thank you so much!
I corrected the paths in all of the configuration files, and it now runs correctly.