Segmentation Fault when increasing the frequency of sending messages (via msgbroker) at nvosd_sink_probe function

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) → GPU
• DeepStream Version → 5.1
• JetPack Version (valid for Jetson only)
• TensorRT Version → 7.2
• NVIDIA GPU Driver Version (valid for GPU only) → 455
• Issue Type( questions, new requirements, bugs) → questions/bug
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing) → increasing the frequency of sending messages (msgbroker) at the nvosd probe function
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Hi all, I am having an issue sending messages every single frame using the msgbroker element in the pipeline. It gives me a segmentation fault when I changed the frequency from %30 (sending every 30th frame) to %1 (sending every frame). I suspect it has something to do with the underlying buffer/memory handling. I did not encounter the segmentation fault when attaching NVDS_EVENT_MSG_META to the buffer at the tracker source/sink pad.

Below is the code segment containing the line that I've changed:

            # Ideally NVDS_EVENT_MSG_META should be attached to the buffer by the
            # component implementing the detection / recognition logic.
            # Here it demonstrates how to use / attach that metadata.
            if not (frame_number % 1):  # % 1 is always 0, so this now fires on every frame
                # Frequency of messages to be sent will be based on use case.
                # Here a message is sent for the first object every Nth frame.
                # Allocating an NvDsEventMsgMeta instance and getting a reference
                # to it. The underlying memory is not managed by Python so that
                # downstream plugins can access it. Otherwise the garbage collector
                # will free it when this probe exits.
                # send in bbox[top, left, width, height]
                msg_meta = pyds.alloc_nvds_event_msg_meta()
                msg_meta.bbox.top = obj_meta.rect_params.top
                msg_meta.bbox.left = obj_meta.rect_params.left
                msg_meta.bbox.width = obj_meta.rect_params.width
                msg_meta.bbox.height = obj_meta.rect_params.height
                msg_meta.frameId = frame_number
                msg_meta.trackingId = long_to_int(obj_meta.object_id)
                msg_meta.confidence = obj_meta.confidence
                msg_meta = generate_event_msg_meta(msg_meta, obj_meta.class_id)
                user_event_meta = pyds.nvds_acquire_user_meta_from_pool(batch_meta)
                if user_event_meta:
                    user_event_meta.user_meta_data = msg_meta
                    user_event_meta.base_meta.meta_type = pyds.NvDsMetaType.NVDS_EVENT_MSG_META
                    # Setting callbacks in the event msg meta. The bindings layer
                    # will wrap these callables in C functions. Currently only one
                    # set of callbacks is supported.
                    pyds.user_copyfunc(user_event_meta, meta_copy_func)
                    pyds.user_releasefunc(user_event_meta, meta_free_func)
                    pyds.nvds_add_user_meta_to_frame(frame_meta, user_event_meta)
                else:
                    print("Error in attaching event meta to buffer\n")

Can you provide some logs before the segmentation fault happens?

It gives me a segmentation fault when I changed the frequency from %30 (sending every 30th frame) to %1 (sending every frame).

When this happened, did you attach NVDS_EVENT_MSG_META to the buffer at the tracker source/sink pad?

Other than the tracker's sink pad, I'm getting a seg fault on all the other pads, such as the nvosd sink pad.

The segmentation fault does not appear if I attach the NVDS_EVENT_MSG_META to the buffer at the tracker sink pad. However, there are some weird results after attaching it there:

  1. I am not able to get rid of the unique trackingId on the display. [see Figure 1]
  2. I am always getting trackingId = -1 for all objects in my message body. [see Figure 2] I suspect this is because at the tracker's sink pad the buffer has not gone through the tracker yet, so object_id still holds the untracked default value.

Figure 1

Figure 2

Here's the debug log using GST_DEBUG=5.
I am attaching NVDS_EVENT_MSG_META to the buffer at the nvosd sink pad.

### GST_DEBUG=5

##### portion of the debug log for First seg fault #####
2384:gst_buffer_foreach_meta: remove metadata 0x7f9ff813ef68 (NvDsMeta)
double free or corruption (fasttop)
Aborted (core dumped)

##### portion of the debug log for Second seg fault #####
0:00:03.038404929   753      0x3522450 DEBUG         GST_SCHEDULING gstpad.c:4320:gst_pad_chain_data_unchecked:<onscreendisplay:sink> calling chainfunction &gst_base_transform_chain with buffer buffer: 0x7f107006bc40, pts 0:00:00.333333330, dts 99:99:99.999999999, dur 99:99:99.999999999, size 64, offset none, offset_end none, flags 0x0
0:00:03.038431174   753      0x3522450 DEBUG          basetransform gstbasetransform.c:1990:default_submit_input_buffer:<onscreendisplay> handling buffer 0x7f107006bc40 of size 64, PTS 0:00:00.333333330 and offset NONE
Segmentation fault (core dumped)

##### portion of the debug log for Third seg fault #####
0:00:03.126906723  2009      0x2e4eed0 DEBUG          basetransform gstbasetransform.c:2129:default_generate_output:<convertor_postosd> doing non-inplace transform
0:00:03.126927359  2009      0x2e4f050 DEBUG         GST_SCHEDULING gstpad.c:4320:gst_pad_chain_data_unchecked:<onscreendisplay:sink> calling chainfunction &gst_base_transform_chain with buffer buffer: 0x7f053006dc40, pts 0:00:00.466666662, dts 99:99:99.999999999, dur 99:99:99.999999999, size 64, offset none, offset_end none, flags 0x0
Segmentation fault (core dumped)


Can you use gdb to check the point of the crash?

Sorry, I am not familiar with gdb. Can you explain how I should conduct the check with gdb?
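A minimal way to check is to launch the app under gdb, reproduce the crash, and print a backtrace (a rough sketch; the script name and arguments are placeholders for the actual command line):

    gdb --args python3 deepstream_app.py <your-usual-arguments>
    (gdb) run
    ... reproduce the segmentation fault ...
    (gdb) bt

The bt output shows the C-level stack at the moment of the crash, which usually points at the plugin or meta handling function involved.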


This is my current pipeline. I managed to fix the segmentation fault by changing two configurations (see the sketch after the list):

  1. setting udpsink.set_property("sync", 0) for the RTSP branch
  2. lowering the bitrate of the encoder for the filesink branch
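In code, the two changes look roughly like this (a minimal sketch; the element factory names and the bitrate value are assumptions based on the standard DeepStream Python apps, not my exact pipeline):

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst

    Gst.init(None)

    # 1. RTSP branch: don't synchronize udpsink against the clock, so a slow
    #    consumer cannot stall the pipeline. (Element names are assumptions.)
    udpsink = Gst.ElementFactory.make("udpsink", "udpsink")
    udpsink.set_property("sync", 0)

    # 2. Filesink branch: lower the encoder bitrate (bits per second; the
    #    exact value here is an assumption).
    encoder = Gst.ElementFactory.make("nvv4l2h264enc", "encoder")
    encoder.set_property("bitrate", 2000000)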

Glad to know you fixed the crash issue.
