Core dump in libnvdsgst_multistream plugin

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 5.0
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)

I have a GStreamer-based application built on DeepStream 5.0 that processes RTSP streams. It ran into a core dump like the one below.

# gdb python3 core.6693
Core was generated by `python3 xxx.py'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x00007fc68453d70e in ?? () from /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
[Current thread is 1 (Thread 0x7fc4185b6700 (LWP 21274))]
(gdb) bt
#0  0x00007fc68453d70e in  () at /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#1  0x00007fc68453f5ad in gst_buffer_pool_acquire_buffer () at /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#2  0x00007fc67e6c1f04 in gst_nvstreammux_chain ()
    at /usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_multistream.so
#3  0x00007fc68457288b in  () at /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#4  0x00007fc68457abb3 in gst_pad_push () at /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#5  0x00007fc684560aab in gst_proxy_pad_chain_default () at /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#6  0x00007fc68457288b in  () at /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#7  0x00007fc68457abb3 in gst_pad_push () at /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#8  0x00007fc684560aab in gst_proxy_pad_chain_default () at /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#9  0x00007fc68457288b in  () at /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#10 0x00007fc68457abb3 in gst_pad_push () at /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#11 0x00007fc684560aab in gst_proxy_pad_chain_default () at /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#12 0x00007fc68457288b in  () at /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#13 0x00007fc68457abb3 in gst_pad_push () at /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#14 0x00007fc67c6ecf1a in  () at /usr/lib/x86_64-linux-gnu/libgstvideo-1.0.so.0
#15 0x00007fc67c6f409b in gst_video_decoder_finish_frame () at /usr/lib/x86_64-linux-gnu/libgstvideo-1.0.so.0
#16 0x00007fc64396ecb6 in gst_v4l2_video_dec_loop ()
    at /usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libgstnvvideo4linux2.so
#17 0x00007fc6845a7269 in  () at /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#18 0x00007fc6863ffb40 in  () at /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0
#19 0x00007fc6863ff175 in  () at /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0
#20 0x00007fc687ab36db in start_thread () at /lib/x86_64-linux-gnu/libpthread.so.0
#21 0x00007fc687dec88f in clone () at /lib/x86_64-linux-gnu/libc.so.6
(gdb)

The code related to the nvstreammux object is shown below.

    streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
    if not streammux:
        sys.stderr.write(" Unable to create NvStreamMux \n")
        sys.exit(1)  # continuing with a None element would fail below
    pipeline.add(streammux)
    streammux.set_property('live-source', 1)
    streammux.set_property('width', 1280)
    streammux.set_property('height', 720)
    streammux.set_property('batch-size', number_sources)  # the number of RTSP streams
    streammux.set_property('batched-push-timeout', 40000)  # 40 ms, in microseconds
    streammux.set_property("nvbuf-memory-type", int(pyds.NVBUF_MEM_CUDA_UNIFIED))

Can anyone please shed some light on this issue? What does this error indicate, and how can I avoid the core dump?

Can you show us the steps to reproduce this crash, or send us your source code and configurations? It is hard to get any useful information from the core dump log alone.

The core dump is not stably reproducible. The issue seems related to the RTSP sources: some of them may not be that stable, so the pipeline may not get frames at each iteration. Sometimes it crashes, and I don't know how to reproduce it within a certain time.
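
To narrow it down, one thing I can try is a buffer probe on each nvstreammux sink pad that records when each source last delivered a frame, so a stalled source shows up in the log before the crash. A rough sketch (the helper names and the 5-second threshold are assumptions, not code from my app):

    import time
    from gi.repository import Gst, GLib

    last_buffer_time = {}  # pad name -> monotonic time of last buffer

    def sink_pad_probe(pad, info, user_data):
        # Record the arrival time of every buffer, per muxer sink pad
        last_buffer_time[pad.get_name()] = time.monotonic()
        return Gst.PadProbeReturn.OK

    def report_stalled_sources():
        # Runs periodically; flags sources that stopped delivering frames
        now = time.monotonic()
        for name, t in last_buffer_time.items():
            if now - t > 5.0:
                print("source on %s has been silent for %.1f s" % (name, now - t))
        return True  # keep the GLib timeout active

    # For each source i, after requesting the muxer sink pad:
    # sinkpad = streammux.get_request_pad("sink_%u" % i)
    # sinkpad.add_probe(Gst.PadProbeType.BUFFER, sink_pad_probe, None)
    # GLib.timeout_add_seconds(5, report_stalled_sources)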

Since we don't have access to the GStreamer plugin source code, I'm just wondering if you have an educated guess at the possible cause of the issue from the core stack.

There has been no update from you for a while, so we are assuming this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

We can’t guess anything without knowing what has happened.