Getting an error while trying to use deepstream_launchpad.ipynb code

Apologies for the late reply on this issue. I have run the deepstream_python_apps module for RTSP input and RTSP output generation. This is the command I ran: python3 deepstream_test1_rtsp_in_rtsp_out.py -i rtsp://192.168.20.91:8554/stream1 -g nvinfer

The error:

Creating Pipeline 
 
Creating streamux 
 
Creating source_bin  0  
 
Creating source bin
source-bin-00
Creating Pgie 
 
Creating tiler 
 
Creating nvvidconv 
 
Creating nvosd 
 
Creating H264 Encoder
Creating H264 rtppay
Adding elements to Pipeline 


 *** DeepStream: Launched RTSP Streaming at rtsp://localhost:8554/ds-test ***


Starting pipeline 

Opening in BLOCKING MODE 
0:00:02.442889116 49881     0x36808550 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1174> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
0:00:05.393353970 49881     0x36808550 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1988> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.3/samples/models/Primary_Detector/side_model.onnx_b1_gpu0_fp16.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT images          3x640x640       
1   OUTPUT kFLOAT output          25200x6         

ERROR: [TRT]: 3: Cannot find binding of given name: conv2d_bbox
0:00:05.569021461 49881     0x36808550 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1955> [UID = 1]: Could not find output layer 'conv2d_bbox' in engine
ERROR: [TRT]: 3: Cannot find binding of given name: conv2d_cov/Sigmoid
0:00:05.569068726 49881     0x36808550 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1955> [UID = 1]: Could not find output layer 'conv2d_cov/Sigmoid' in engine
0:00:05.569082167 49881     0x36808550 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2091> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.3/samples/models/Primary_Detector/side_model.onnx_b1_gpu0_fp16.engine
0:00:05.597969505 49881     0x36808550 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
Decodebin child added: source 

Error: gst-resource-error-quark: Could not open resource for reading. (5): gstrtspsrc.c(6232): gst_rtspsrc_setup_auth (): /GstPipeline:pipeline0/GstBin:source-bin-00/GstURIDecodeBin:uri-decode-bin/GstRTSPSrc:source:
No supported authentication protocol was found

This is a problem with the RTSP source. It seems that a username and password are required.
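If the camera requires authentication, the credentials can be supplied in the URI itself. A minimal sketch of a helper that does this (the username and password here are placeholders, not real values from this setup):

```python
from urllib.parse import quote

def with_credentials(uri: str, user: str, password: str) -> str:
    """Embed URL-encoded credentials into an rtsp:// URI."""
    scheme, rest = uri.split("://", 1)
    return f"{scheme}://{quote(user, safe='')}:{quote(password, safe='')}@{rest}"

# Placeholder credentials -- replace with the camera's actual ones.
print(with_credentials("rtsp://192.168.20.91:8554/stream1", "admin", "secret"))
```

Alternatively, connect to uridecodebin's "source-setup" signal and set the rtspsrc "user-id" / "user-pw" properties on the created source element.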

Okay, the issue was with my RTSP stream port, which needed to be corrected. The pipeline is now running, but there seems to be a problem with the bounding boxes.

Creating Pipeline 
 
Creating streamux 
 
Creating source_bin  0  
 
Creating source bin
source-bin-00
Creating Pgie 
 
Creating tiler 
 
Creating nvvidconv 
 
Creating nvosd 
 
Creating H264 Encoder
Creating H264 rtppay
Adding elements to Pipeline 


 *** DeepStream: Launched RTSP Streaming at rtsp://localhost:8554/ds-test ***


Starting pipeline 

Opening in BLOCKING MODE 
0:00:00.213614285 58331     0x25d8c550 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1174> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
0:00:03.245521159 58331     0x25d8c550 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1988> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.3/samples/models/Primary_Detector/side_model.onnx_b1_gpu0_fp16.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT images          3x640x640       
1   OUTPUT kFLOAT output          25200x6         

ERROR: [TRT]: 3: Cannot find binding of given name: conv2d_bbox
0:00:03.418736973 58331     0x25d8c550 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1955> [UID = 1]: Could not find output layer 'conv2d_bbox' in engine
ERROR: [TRT]: 3: Cannot find binding of given name: conv2d_cov/Sigmoid
0:00:03.418796591 58331     0x25d8c550 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1955> [UID = 1]: Could not find output layer 'conv2d_cov/Sigmoid' in engine
0:00:03.418813679 58331     0x25d8c550 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2091> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.3/samples/models/Primary_Detector/side_model.onnx_b1_gpu0_fp16.engine
0:00:03.447223970 58331     0x25d8c550 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
Decodebin child added: source 

Decodebin child added: decodebin0 

Decodebin child added: rtph264depay0 

Decodebin child added: h264parse0 

Decodebin child added: capsfilter0 

Decodebin child added: nvv4l2decoder0 

Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
In cb_newpad

gstname= video/x-raw
features= <Gst.CapsFeatures object at 0xffff959faac0 (GstCapsFeatures at 0xffff000a5100)>
NvMMLiteOpen : Block : BlockType = 4 
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4 
0:00:03.732902510 58331     0x25d72aa0 ERROR                nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::parseBoundingBox() <nvdsinfer_context_impl_output_parsing.cpp:59> [UID = 1]: Could not find output coverage layer for parsing objects
0:00:03.732939823 58331     0x25d72aa0 ERROR                nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:735> [UID = 1]: Failed to parse bboxes
Segmentation fault (core dumped)

I am also using a YOLOv5 .engine model file converted through the yolov5_gpu_optimizations repository. The error log seems to come from nvdsinfer_context_impl_output_parsing.cpp.
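The two "Cannot find binding" lines explain the crash: the stock dstest1_pgie_config.txt is written for the default resnet10 Caffe detector, whose output layers are conv2d_bbox and conv2d_cov/Sigmoid, while the YOLOv5 engine exposes a single tensor named output (25200x6). The config therefore has to point nvinfer at the YOLO output and a custom bbox parser. A sketch of the keys involved (the parser function and library names below are assumptions based on common YOLOv5 DeepStream repos; verify against the repo you used to build the engine):

```ini
[property]
# Point at the YOLOv5 engine instead of the resnet10 caffemodel
model-engine-file=side_model.onnx_b1_gpu0_fp16.engine
network-mode=2                 # 2 = FP16, matching the engine
num-detected-classes=80        # adjust to your model
# The YOLOv5 engine has a single output tensor named "output"
output-blob-names=output
# YOLO output needs a custom parser; names are assumptions -- check your build
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=./libnvdsinfer_custom_impl_Yolo.so
```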

Did you successfully run the sample?

I renamed config_infer_primary_yoloV5.txt to dstest1_pgie_config.txt and copied it to the deepstream-rtsp-in-rtsp-out directory; I can run it fine.

Please make sure the *.engine file is generated on the same device.

Okay, I copied the whole sample config file and renamed it. It now seems to run without any code errors, but for some reason I still get a segmentation fault (core dumped). I checked memory usage as well, and it does not fill up.
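When the script dies with nothing but "Segmentation fault (core dumped)", Python's standard-library faulthandler at least reports which Python frame was active when the native crash happened (the crash itself is in C/C++ code, so a full diagnosis still needs something like gdb --args python3 ... and bt). A minimal sketch:

```python
import faulthandler

# Dump the Python traceback to stderr on SIGSEGV/SIGFPE/SIGABRT/SIGBUS.
# Put this at the very top of deepstream_test1_rtsp_in_rtsp_out.py.
faulthandler.enable()
```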

Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
In cb_newpad

gstname= video/x-raw
features= <Gst.CapsFeatures object at 0xffffac4fcac0 (GstCapsFeatures at 0xffff180a5920)>
NvMMLiteOpen : Block : BlockType = 4 
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4 
Segmentation fault (core dumped)

I am currently running this on an Orin NX with 8 GB of RAM.

So, which device are you running on? Do not copy .engine files or shared libraries (.so); generate them on the device where they are deployed.
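The reason for this rule: a TensorRT engine is serialized for one specific GPU architecture, TensorRT version, and precision, so an engine built on a Xavier will not deserialize correctly on an Orin. One way to rebuild on the target device is with trtexec, which ships with TensorRT on Jetson; the paths below are placeholders for this setup, not verified locations:

```shell
# Rebuild the engine from the ONNX model on the Orin NX itself
/usr/src/tensorrt/bin/trtexec \
    --onnx=side_model.onnx \
    --saveEngine=side_model.onnx_b1_gpu0_fp16.engine \
    --fp16
```

Alternatively, delete the stale .engine file and let nvinfer regenerate it from the ONNX model on the first run.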

I was originally running on a Xavier but had to switch to an Orin due to some issues. In any case, I have generated all the files on the device I am running on; I am not transferring anything between devices. The .so and .engine files are generated and run on the Orin NX only.

I was able to run the scripts after reflashing the device and reinstalling all the dependencies. The DeepStream script now executes perfectly. Thank you very much for your help! @junshengy
