Error: gst-library-error-quark: Configuration file parsing failed (5)

Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) Tesla T4
• DeepStream Version 6.2 (docker image)
• TensorRT Version 8.5.2
• NVIDIA GPU Driver Version (valid for GPU only) 525.85.12
• Issue Type( questions, new requirements, bugs) Bug

Hello, I’m constructing a python DeepStream pipeline as follows:
trafficcamnet detector → lpd → lpr

My pipeline throws this error at the lpd stage:
Error: gst-library-error-quark: Configuration file parsing failed (5): gstnvinfer.cpp(842): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:secondary1-nvinference-engine:
Config file path: /root/sgie1_config_lpd.txt

Here’s my config file:
sgie1_config_lpd.txt (1.0 KB)
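For context, nvinfer config files use an INI-style section/key=value layout, and a malformed line (a stray character, a missing `[property]` header, a duplicated key) produces exactly this "Configuration file parsing failed" error. As a rough pre-flight proxy, the same grammar can be checked with Python's stdlib `configparser` before launching the pipeline (the keys below are an illustrative minimal secondary-GIE sketch, not the actual attached config):

```python
import configparser
import os
import tempfile

# Hypothetical minimal secondary-GIE config illustrating the INI-style
# layout nvinfer expects; real keys/values must match your LPDNet setup.
sample = """\
[property]
gpu-id=0
process-mode=2
gie-unique-id=2
operate-on-gie-id=1
"""

path = os.path.join(tempfile.mkdtemp(), "sgie1_config_lpd.txt")
with open(path, "w") as f:
    f.write(sample)

# configparser uses the same section/key=value grammar, so a parse
# failure here is a strong hint that nvinfer will reject the file too.
cfg = configparser.ConfigParser()
cfg.read(path)
print(cfg.get("property", "gie-unique-id"))  # → 2
```

This won't catch nvinfer-specific key errors (unknown keys, wrong value types), but it does catch the structural problems that typically trigger error (5).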

Here’s my code:
ds_branch7.py (7.8 KB)

Some useful info:
root@f066f85e9395:~# id
uid=0(root) gid=0(root) groups=0(root)

root@f066f85e9395:~/ngc_assets/lpdnet_vpruned_v2.1# ls
usa_lpd_label.txt yolov4_tiny_ccpd_cal.bin yolov4_tiny_ccpd_deployable.etlt yolov4_tiny_usa_cal.bin yolov4_tiny_usa_deployable.etlt

From the logs, parsing the configuration file failed. Please refer to lpd_yolov4-tiny_us.txt instead.

Using lpd_yolov4-tiny_us.txt config file throws a bunch of warnings and then freezes.

0:00:02.475101991 3203897      0x1bc0440 WARN                    v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2-decoder_0:sink> Unable to try format: Unknown error -1
0:00:02.475114626 3203897      0x1bc0440 WARN                    v4l2 gstv4l2object.c:2942:gst_v4l2_object_probe_caps_for_format:<nvv4l2-decoder_0:sink> Could not probe minimum capture size for pixelformat H264
0:00:02.475125273 3203897      0x1bc0440 WARN                    v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2-decoder_0:sink> Unable to try format: Unknown error -1
0:00:02.475135629 3203897      0x1bc0440 WARN                    v4l2 gstv4l2object.c:2948:gst_v4l2_object_probe_caps_for_format:<nvv4l2-decoder_0:sink> Could not probe maximum capture size for pixelformat H264

WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
WARNING: [TRT]: onnx2trt_utils.cpp:377: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
(the above warning is repeated 20 more times)
WARNING: [TRT]: builtin_op_importers.cpp:5245: Attribute caffeSemantics not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
WARNING: [TRT]: Missing scale and zero-point for tensor (Unnamed Layer* 125) [Constant]_output, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
WARNING: [TRT]: Missing scale and zero-point for tensor (Unnamed Layer* 207) [Constant]_output, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
WARNING: [TRT]: Missing scale and zero-point for tensor (Unnamed Layer* 208) [Shuffle]_output, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
WARNING: [TRT]: Missing scale and zero-point for tensor (Unnamed Layer* 210) [Constant]_output, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
WARNING: [TRT]: Missing scale and zero-point for tensor (Unnamed Layer* 217) [Constant]_output, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
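As an aside, the CUDA lazy-loading warning near the top of that log can be addressed by setting the environment variable it mentions before any CUDA context is created; in a Python pipeline that means before constructing GPU-touching GStreamer elements:

```python
import os

# CUDA_MODULE_LOADING is honored by CUDA 11.7+; it must be set before
# the first CUDA context is created (i.e. at the very top of the script,
# before the pipeline's GPU elements are instantiated).
os.environ["CUDA_MODULE_LOADING"] = "LAZY"
```

This only reduces device memory usage; it is unrelated to the parsing error itself.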

I also tried using the config config_infer_secondary_lpdnet.txt mentioned here: LPDNet | NVIDIA NGC

That doesn’t seem to work either. I get the same error I shared last time.

It takes some time to generate the engine; it did not freeze. Please wait a moment.
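The engine build only has to happen once: nvinfer serializes the built engine to disk (named after the model, batch size, and precision), and pointing `model-engine-file` in the config at that file skips the rebuild on subsequent runs. A tiny sketch of the check (the file name below is illustrative, not from this setup):

```python
import os

def engine_status(engine_path):
    """Return 'cached' if a serialized TensorRT engine already exists,
    else 'build' (meaning the next run will spend time building one)."""
    return "cached" if os.path.isfile(engine_path) else "build"

# Illustrative name: nvinfer serializes the engine next to the model as
# <model>_b<batch>_gpu<id>_<precision>.engine
print(engine_status("yolov4_tiny_usa_deployable.etlt_b16_gpu0_int8.engine"))
```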


Thanks. This solved the issue with lpd.

My pipeline: trafficcamnet detector → lpd → lpr
I’ve now added lpr to the pipeline and I’m facing this error:

0:11:13.538279126 3484904      0x13ee700 ERROR                nvinfer gstnvinfer.cpp:674:gst_nvinfer_logger:<secondary2-nvinference-engine> NvDsInferContext[UID 3]: Error in NvDsInferContextImpl::fillClassificationOutput() <nvdsinfer_context_impl_output_parsing.cpp:803> [UID = 3]: Failed to parse classification attributes using custom parse function
LLVM ERROR: out of memory
Aborted (core dumped)

Here’s my config file:
sgie2_config_lpr.txt (1.0 KB)

Here’s my updated code:
ds_branch7.py (7.8 KB)

Some useful info:
root@f066f85e9395:~/ngc_assets/lprnet_vdeployable_v1.0# ls
ch_lp_characters.txt ch_lprnet_baseline18_deployable.etlt us_lp_characters.txt us_lprnet_baseline18_deployable.etlt us_lprnet_baseline18_deployable.etlt_b1_gpu0_fp16.engine

root@f066f85e9395:/opt/nvidia/deepstream/deepstream-6.2/lib# ls | grep libnvdsinfer_custom_impl_lpr.so
libnvdsinfer_custom_impl_lpr.so

  1. please make sure you copied dict.txt; please refer to the doc
  2. from the error, it failed in the "custom parse function"; you can add a log in NvDsInferParseCustomNVPlate to check the reason.
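On point 1: as far as I can tell from the deepstream_lpr_app parser source, NvDsInferParseCustomNVPlate opens dict.txt via a relative path, i.e. from the directory the pipeline process is launched in, not from the location of the .so. A small pre-flight check along these lines (the helper name is mine) can rule that out:

```python
import os
import tempfile

def check_lpr_dict(workdir):
    """Return the character dictionary the LPR parser would see when the
    pipeline is launched from `workdir`, or [] if dict.txt is missing."""
    path = os.path.join(workdir, "dict.txt")
    if not os.path.isfile(path):
        return []
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

# Example: write a tiny stand-in dictionary and validate it
d = tempfile.mkdtemp()
with open(os.path.join(d, "dict.txt"), "w") as f:
    f.write("0\n1\n2\nA\nB\n")
print(check_lpr_dict(d))  # → ['0', '1', '2', 'A', 'B']
```

Running this with `os.getcwd()` just before starting the pipeline confirms whether the parser will actually find the dictionary.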

I’ve now copied dict_us.txt as dict.txt:
root@f066f85e9395:~/deepstream_lpr_app/deepstream-lpr-app# cat dict.txt
0
1
2
3
.
.
.
X
Y
Z

The same error still occurs. Please confirm whether this location of dict.txt is correct.

I used NvDsInferParseCustomNVPlate without any modifications. I’m not sure how to debug this. Also, I’m trying to code the entire pipeline in Python.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

Sorry for the late reply. Is this still a DeepStream issue to support? As you know, the DeepStream SDK is a C library; Python uses the DeepStream SDK through Python bindings.

  1. from the error, it failed in sgie2's postprocessing function; can you add logs in NvDsInferParseCustomNVPlate to check which line of code causes the error?
  2. noticing your sgie2_config_lpr.txt is almost the same as lpr_config_sgie_us.yml, please check whether the inference results of the upstream GIEs are correct; specifically, can you see the bounding boxes for the car and the license plate?

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.