Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Tesla T4
• DeepStream Version: 6.2 (docker image)
• TensorRT Version: 8.5.2
• NVIDIA GPU Driver Version (valid for GPU only): 525.85.12
• Issue Type (questions, new requirements, bugs): Bug
Hello, I’m constructing a Python DeepStream pipeline as follows:
trafficcamnet detector → lpd → lpr
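For context, here is a trimmed sketch of how I create and link the three inference stages. The source/streammux/sink elements are omitted and the pgie config file name is a placeholder; only the two sgie config paths match my actual setup:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.Pipeline.new("pipeline0")

# Primary detector (trafficcamnet) and the two secondary GIEs (LPD, then LPR).
pgie = Gst.ElementFactory.make("nvinfer", "primary-nvinference-engine")
sgie1 = Gst.ElementFactory.make("nvinfer", "secondary1-nvinference-engine")
sgie2 = Gst.ElementFactory.make("nvinfer", "secondary2-nvinference-engine")

# The pgie config path below is a placeholder; the sgie paths are my real files.
pgie.set_property("config-file-path", "/root/pgie_config_trafficcamnet.txt")
sgie1.set_property("config-file-path", "/root/sgie1_config_lpd.txt")
sgie2.set_property("config-file-path", "/root/sgie2_config_lpr.txt")

for elem in (pgie, sgie1, sgie2):
    pipeline.add(elem)

# Upstream (source -> nvstreammux) feeds pgie; osd/sink follow sgie2 (omitted).
pgie.link(sgie1)
sgie1.link(sgie2)
```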
My pipeline throws this error because of lpd:
Error: gst-library-error-quark: Configuration file parsing failed (5): gstnvinfer.cpp(842): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:secondary1-nvinference-engine:
Config file path: /root/sgie1_config_lpd.txt
Using the lpd_yolov4-tiny_us.txt config file instead throws a series of warnings and then the pipeline freezes:
0:00:02.475101991 3203897 0x1bc0440 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2-decoder_0:sink> Unable to try format: Unknown error -1
0:00:02.475114626 3203897 0x1bc0440 WARN v4l2 gstv4l2object.c:2942:gst_v4l2_object_probe_caps_for_format:<nvv4l2-decoder_0:sink> Could not probe minimum capture size for pixelformat H264
0:00:02.475125273 3203897 0x1bc0440 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:<nvv4l2-decoder_0:sink> Unable to try format: Unknown error -1
0:00:02.475135629 3203897 0x1bc0440 WARN v4l2 gstv4l2object.c:2948:gst_v4l2_object_probe_caps_for_format:<nvv4l2-decoder_0:sink> Could not probe maximum capture size for pixelformat H264
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
WARNING: [TRT]: onnx2trt_utils.cpp:377: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
(the warning above is repeated 20 more times)
WARNING: [TRT]: builtin_op_importers.cpp:5245: Attribute caffeSemantics not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
WARNING: [TRT]: Missing scale and zero-point for tensor (Unnamed Layer* 125) [Constant]_output, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
WARNING: [TRT]: Missing scale and zero-point for tensor (Unnamed Layer* 207) [Constant]_output, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
WARNING: [TRT]: Missing scale and zero-point for tensor (Unnamed Layer* 208) [Shuffle]_output, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
WARNING: [TRT]: Missing scale and zero-point for tensor (Unnamed Layer* 210) [Constant]_output, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
WARNING: [TRT]: Missing scale and zero-point for tensor (Unnamed Layer* 217) [Constant]_output, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
I also tried the config_infer_secondary_lpdnet.txt config mentioned here: LPDNet | NVIDIA NGC
That doesn’t seem to work either; I get the same error I shared above.
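For reference, the [property] section I’m pointing sgie1 at looks roughly like this. It’s reconstructed from the deepstream_lpr_app sample, so the paths, model key, and gie-unique-id values below are placeholders for my local values rather than the exact file contents:

```
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
tlt-model-key=nvidia_tlt
tlt-encoded-model=/root/ngc_assets/lpdnet_vdeployable_v1.0/yolov4_tiny_usa_deployable.etlt
labelfile-path=/root/ngc_assets/lpdnet_vdeployable_v1.0/usa_lpd_label.txt
network-mode=2
num-detected-classes=1
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=0
process-mode=2
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=/opt/nvidia/deepstream/deepstream-6.2/lib/libnvds_infercustomparser_tao.so
```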
Some useful info:
root@f066f85e9395:~/ngc_assets/lprnet_vdeployable_v1.0# ls
ch_lp_characters.txt ch_lprnet_baseline18_deployable.etlt us_lp_characters.txt us_lprnet_baseline18_deployable.etlt us_lprnet_baseline18_deployable.etlt_b1_gpu0_fp16.engine
root@f066f85e9395:/opt/nvidia/deepstream/deepstream-6.2/lib# ls | grep libnvdsinfer_custom_impl_lpr.so
libnvdsinfer_custom_impl_lpr.so
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
Sorry for the late reply. Is this still a DeepStream issue that needs support? As you know, the DeepStream SDK is a C library; Python uses the DeepStream SDK through its Python bindings.
From the error, it failed in sgie2’s postprocessing function. Can you add logs in NvDsInferParseCustomNVPlate to check which line of code causes the error?
Noticing that your sgie2_config_lpr.txt is almost the same as lpr_config_sgie_us.yml, please check whether the inference results of the upstream GIEs are correct. Specifically, can you see the bounding boxes for the car and the license plate?
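For example, a minimal pad probe like the sketch below (adapted from the deepstream-test Python samples; where you attach it, e.g. the src pad of your second sgie, depends on your pipeline) will print each object’s source GIE id, label, and bounding box:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def print_objects_probe(pad, info, user_data):
    # Dump every object so we can see which GIE produced which bbox.
    # Car boxes should come from the pgie and plate boxes from the LPD sgie
    # (the actual unique_component_id values depend on your gie-unique-id settings).
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj = pyds.NvDsObjectMeta.cast(l_obj.data)
            r = obj.rect_params
            print(f"frame={frame_meta.frame_num} gie={obj.unique_component_id} "
                  f"label={obj.obj_label} "
                  f"bbox=({r.left:.0f},{r.top:.0f},{r.width:.0f},{r.height:.0f})")
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

# Example attachment point (sgie2 is your LPR nvinfer element):
# sgie2.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, print_objects_probe, 0)
```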