Pipeline segmentation fault at end of stream with nvmultiurisrcbin

Please provide complete information as applicable to your setup.

**• Hardware Platform (Jetson / GPU)** GPU
**• DeepStream Version** 7.1 (Docker container)
I use nvmultiurisrcbin, and the pipeline often crashes at end of stream.

nvstreammux: Successfully handled EOS for source_id=0
Element Message from multi-uri_creator: stream-remove - stream-remove, source-id=(uint)0, sensor-id=(string)stream0, sensor-name=(string)front_door, uri=(string)file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4;
0:00:59.369365665  4395 0x7f234c000d80 ERROR          v4l2allocator gstv4l2allocator.c:1398:gst_v4l2_allocator_qbuf:<nvv4l2decoder0:pool:src:allocator> failed queueing buffer 0: Bad file descriptor
0:00:59.369402478  4395 0x7f234c000d80 ERROR         v4l2bufferpool gstv4l2bufferpool.c:1498:gst_v4l2_buffer_pool_qbuf:<nvv4l2decoder0:pool:src> could not queue a buffer 0
0:00:59.382796974  4395 0x7f234c000d80 ERROR          v4l2allocator gstv4l2allocator.c:1398:gst_v4l2_allocator_qbuf:<nvv4l2decoder0:pool:src:allocator> failed queueing buffer 1: Bad file descriptor
0:00:59.382874952  4395 0x7f234c000d80 ERROR         v4l2bufferpool gstv4l2bufferpool.c:1498:gst_v4l2_buffer_pool_qbuf:<nvv4l2decoder0:pool:src> could not queue a buffer 1
Warning: gst-resource-error-quark: No Sources found at the input of muxer. Waiting for sources. (3): gstnvstreammux.cpp(2893): gst_nvstreammux_src_push_loop (): /GstPipeline:pipeline0/GstDsNvMultiUriBin:multi-uri/GstBin:multi-uri_creator/GstNvStreamMux:src_bin_muxer
0:00:59.383259923  4395 0x7f234c000d80 ERROR          v4l2allocator gstv4l2allocator.c:1398:gst_v4l2_allocator_qbuf:<nvv4l2decoder0:pool:src:allocator> failed queueing buffer 2: Bad file descriptor
0:00:59.383267778  4395 0x7f234c000d80 ERROR         v4l2bufferpool gstv4l2bufferpool.c:1498:gst_v4l2_buffer_pool_qbuf:<nvv4l2decoder0:pool:src> could not queue a buffer 2
Segmentation fault (core dumped)

Some of my code is:

    # Conversion and caps for the tiler branch
    nvvidconv_tiler = Gst.ElementFactory.make("nvvideoconvert", "nvvidconv_tiler")
    if not nvvidconv_tiler:
        logger.error("Unable to create nvvidconv_tiler\n")

    filter_tiler = Gst.ElementFactory.make("capsfilter", "filter_tiler")
    if not filter_tiler:
        logger.error("Unable to create filter_tiler\n")
    filter_tiler.set_property("caps", Gst.Caps.from_string("video/x-raw(memory:NVMM), format=RGBA"))

    # Conversion and caps for the encoder branch
    nvvidconv_encoder = Gst.ElementFactory.make("nvvideoconvert", "nvvidconv_encoder")
    if not nvvidconv_encoder:
        logger.error("Unable to create nvvidconv_encoder\n")

    filter_encoder = Gst.ElementFactory.make("capsfilter", "filter_encoder")
    if not filter_encoder:
        logger.error("Unable to create filter_encoder\n")
    filter_encoder.set_property("caps", Gst.Caps.from_string("video/x-raw(memory:NVMM), format=I420"))

    # H.265 encoder and RTP payloader for the RTSP output branch
    encoder = Gst.ElementFactory.make("nvv4l2h265enc", "encoder")
    if not encoder:
        logger.error("Unable to create encoder\n")
    encoder.set_property("tuning-info-id", 2)
    encoder.set_property("control-rate", 2)

    rtppay = Gst.ElementFactory.make("rtph265pay", "rtppay")
    if not rtppay:
        logger.error("Unable to create rtppay\n")


    logger.info("Adding elements to Pipeline \n")
    pipeline.add(xxxxxxx)

    logger.info("Linking elements in the Pipeline \n")
    source.link(pgie)
    pgie.link(queue_tee)
    queue_tee.link(tee)
    tee.link(queue_msgconv)
    queue_msgconv.link(msgconv)
    msgconv.link(queue_msgbroker)
    queue_msgbroker.link(msgbroker)


    if args.output=="rtsp":
        tee.link(queue_tiler)
        queue_tiler.link(nvvidconv_tiler)
        nvvidconv_tiler.link(filter_tiler)
        filter_tiler.link(nvtiler)
        nvtiler.link(nvosd)
        nvosd.link(nvvidconv_encoder)
        nvvidconv_encoder.link(filter_encoder)
        filter_encoder.link(encoder)
        encoder.link(rtppay)
        rtppay.link(sink)
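
For completeness: if the sink at the end of this rtsp branch is a udpsink, the stream is typically served over RTSP via GstRtspServer, as in the DeepStream Python reference apps. A minimal sketch, with a hypothetical UDP port and mount point:

import gi
gi.require_version("Gst", "1.0")
gi.require_version("GstRtspServer", "1.0")
from gi.repository import Gst, GstRtspServer

RTSP_PORT = 8554   # hypothetical RTSP service port
UDP_PORT = 5400    # must match the udpsink "port" property in the pipeline

server = GstRtspServer.RTSPServer.new()
server.props.service = str(RTSP_PORT)
server.attach(None)

factory = GstRtspServer.RTSPMediaFactory.new()
# udpsrc pulls the H.265/RTP stream sent by the pipeline's udpsink
factory.set_launch(
    '( udpsrc name=pay0 port=%d buffer-size=524288 '
    'caps="application/x-rtp, media=video, clock-rate=90000, '
    'encoding-name=(string)H265, payload=96" )' % UDP_PORT)
factory.set_shared(True)
server.get_mount_points().add_factory("/ds-test", factory)
# stream is then reachable at rtsp://<host>:8554/ds-test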

Could you simplify the code to narrow down this issue? For example, does “nvmultiurisrcbin -> pgie -> fakesink” run fine?
Please refer to the following pipeline, which works fine.

gst-launch-1.0 nvmultiurisrcbin \
port=9000 ip-address=localhost \
batched-push-timeout=33333 max-batch-size=10 \
drop-pipeline-eos=1 live-source=1 \
uri-list=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4,file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 width=1920 height=1080 \
! nvmultistreamtiler ! nveglglessink
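
Below is a minimal Python sketch of that kind of simplified test (nvmultiurisrcbin -> pgie -> fakesink), assuming the standard GStreamer Python bindings; the nvinfer config path is only a placeholder to be replaced with the one from your app:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)
pipeline = Gst.Pipeline.new("test-pipeline")

# nvmultiurisrcbin wraps the decoder and nvstreammux internally
source = Gst.ElementFactory.make("nvmultiurisrcbin", "source")
source.set_property("uri-list",
    "file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4")
source.set_property("max-batch-size", 10)
source.set_property("batched-push-timeout", 33333)
source.set_property("width", 1920)
source.set_property("height", 1080)

pgie = Gst.ElementFactory.make("nvinfer", "pgie")
# placeholder config path -- use the config from your application
pgie.set_property("config-file-path", "pgie_config.txt")

sink = Gst.ElementFactory.make("fakesink", "sink")
sink.set_property("sync", False)

for elem in (source, pgie, sink):
    pipeline.add(elem)
source.link(pgie)
pgie.link(sink)

loop = GLib.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message::eos", lambda *_: loop.quit())
bus.connect("message::error", lambda *_: loop.quit())

pipeline.set_state(Gst.State.PLAYING)
try:
    loop.run()
finally:
    pipeline.set_state(Gst.State.NULL)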

Your gst-launch pipeline runs OK on my side.

Maybe some other elements are causing the errors.

Sorry for the late reply. Is this still a DeepStream issue to support? Thanks!

Yes, it is still a problem.

Could you simplify your custom code to narrow down this issue? For example:

  1. If you remove the msgconv branch from the tee (e.g. terminate it with a fakesink, as in the sketch after this list), does the app run well?
  2. If you use nvstreammux + fakesink, does the app run well?
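
For example (element names here are hypothetical, reusing pipeline and tee from your existing code), one tee branch can be capped with queue -> fakesink so the other branch runs alone:

# hypothetical debug branch: tee -> queue -> fakesink
queue_dbg = Gst.ElementFactory.make("queue", "queue_dbg")
fakesink_dbg = Gst.ElementFactory.make("fakesink", "fakesink_dbg")
fakesink_dbg.set_property("sync", False)
pipeline.add(queue_dbg)
pipeline.add(fakesink_dbg)
tee.link(queue_dbg)          # link() requests a new src pad on the tee
queue_dbg.link(fakesink_dbg)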

In my test:

source.link(pgie)
pgie.link(queue_tee)
queue_tee.link(tee)
tee.link(queue_msgconv)
queue_msgconv.link(sink)

Tested with 10 streams: at end of stream, the pipeline does not core dump.

After adding an OSD branch and testing with 10 streams,
the pipeline core dumps at end of stream:

source.link(pgie)
pgie.link(queue_tee)
queue_tee.link(tee)
tee.link(queue_msgconv)
queue_msgconv.link(sink)

tee.link(queue_tiler)
queue_tiler.link(converter)
converter.link(filter_tiler)
filter_tiler.link(nvtiler)
nvtiler.link(nvosd)
nvosd.link(tee_sink)
tee_sink.link(display_sink)

Sorry for the late reply. If you have two branches, please make sure each single branch works well on its own. You can use a gst-launch command to debug first; please refer to the working pipeline in this topic.
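
For example, the tiler/OSD branch alone could be tested with a command like the following (mirroring the working pipeline above, with nvvideoconvert and nvdsosd added for the OSD stage; adjust the URI and sink for your setup):

gst-launch-1.0 nvmultiurisrcbin \
port=9000 ip-address=localhost \
batched-push-timeout=33333 max-batch-size=10 \
drop-pipeline-eos=1 live-source=1 \
uri-list=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 \
width=1920 height=1080 \
! nvmultistreamtiler ! nvvideoconvert ! nvdsosd ! nveglglessink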