Segmentation fault when extracting JPEG image at probe

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 6.2
• Issue Type (questions, new requirements, bugs) error

While attempting to extract a JPEG image within a probe, I encountered a segmentation fault.

Pipeline code:

def make_element(factory_name, name):
    logger.info(f"creating element {factory_name} ----- {name}")
    element = Gst.ElementFactory.make(factory_name, name)
    if not element:
        logger.error(f"Unable to create {factory_name} ----- {name}")
    return element

Gst.init(None)
pipeline = Gst.Pipeline()
logger.info("Creating Pipeline")
if not pipeline:
    logger.error("Unable to initialize Pipeline")

streammux = make_element("nvstreammux", "Stream-muxer")
pipeline.add(streammux)

for idx, uri in stream_list.items():
    logger.info(f"Creating source_bin  for uri {uri} ID {idx}")

    # Create first source bin and add to pipeline
    source_bin = create_uridecode_bin(idx, uri)
    logger.info("Creating source_bin")
    if not source_bin:
        logger.error(f"Failed to create source bin for uri {uri} ID {idx}.")
        # API hit for not connecting
    else:
        pipeline.add(source_bin)

# queue1 = make_element("queue", "queue1")
# pipeline.add(queue1)

streammux.set_property("batched-push-timeout", 25000)
streammux.set_property("batch-size", MAX_SOURCE)
streammux.set_property("gpu_id", gpu_id)
streammux.set_property("live-source", 1)  # need to check
streammux.set_property('width', frame_width)
streammux.set_property('height', frame_height)

pgie = make_element("nvinfer", "primary-inference")
pgie.set_property('config-file-path', "model_config.txt")
pgie.set_property("gpu_id", gpu_id)
pgie.set_property("batch-size", MAX_SOURCE)
pipeline.add(pgie)



nvvidconv = make_element("nvvideoconvert", "convertor")
nvvidconv.set_property("gpu_id", gpu_id)
pipeline.add(nvvidconv)

caps = Gst.Caps.from_string("video/x-raw(memory:NVMM), format=RGBA")
filter = make_element("capsfilter", "filter")
filter.set_property("caps", caps)
pipeline.add(filter)

encoder = make_element("jpegenc", "encoder")
pipeline.add(encoder)

tee = make_element("tee", "tee")
# nvstreamdemux = make_element("nvstreamdemux", "nvstreamdemux")
pipeline.add(tee)

if (not is_aarch64()):
    # sink.set_property("gpu_id", gpu_id)
    mem_type = int(pyds.NVBUF_MEM_CUDA_UNIFIED)
    # streammux.set_property("nvbuf-memory-type", mem_type)
    nvvidconv.set_property("nvbuf-memory-type", mem_type)
    streammux.set_property("nvbuf-memory-type", int(pyds.NVBUF_MEM_CUDA_DEVICE))

streammux.link(pgie)
pgie.link(nvvidconv)
nvvidconv.link(encoder)
encoder.link(tee)


for ID, uri in stream_list.items():

    # creating queue
    queue = make_element("queue", f"queue-{ID}")
    queue.set_property("leaky", 2)
    pipeline.add(queue)

    sink = make_element("filesink", f"filesink-{ID}")
    sink.set_property("location", "image_%05d.jpg")
    pipeline.add(sink)

    # connect tee -> queue
    padname = "src_%u" % ID
    teesrcpad = tee.get_request_pad(padname)
    if not teesrcpad:
        logger.error("Unable to create demux src pad ")

    queuesinkpad = queue.get_static_pad("sink")
    if not queuesinkpad:
        logger.error("Unable to create queue sink pad ")
    teesrcpad.link(queuesinkpad)

    # connect queue -> sink and attach the buffer probe on the sink pad
    queue.link(sink)
    sinkpad = sink.get_static_pad("sink")
    probe_func = partial(osd_sink_pad_buffer_probe, ID=ID)
    sinkpad.add_probe(Gst.PadProbeType.BUFFER, probe_func, 0)

Probe code:

def osd_sink_pad_buffer_probe(pad, info, udata, ID=0):

    gst_buffer = info.get_buffer()
    if not gst_buffer:
        logger.error("Unable to get GstBuffer ")
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    if not batch_meta:
        return Gst.PadProbeReturn.OK

    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            logger.error("debug1------------------------------->at the except ")
            break


        source_id = frame_meta.source_id
        print("debug------------->", source_id, frame_meta.source_frame_height)
        n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        print(type(n_frame))

        try:
            l_frame = l_frame.next
        except StopIteration:
            break

    return Gst.PadProbeReturn.OK

How do you run this program, on the host or in Docker?

What are your CUDA and driver versions?

Can you share all the configuration files and sample code for me to reproduce this issue?

Could you get a detailed log using this command line?

GST_DEBUG=3 python3 your_app.py

I am running it in Docker.

Driver Version: 535.113.01, CUDA Version: 12.2

After checking the documentation I realized that pyds.get_nvds_buf_surface supports only RGBA frames. Is that the reason for the segmentation fault? What is the alternative way to extract JPEG frames at the probe from a batch of data?
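
(For context, the approach used in NVIDIA's deepstream-imagedata-multistream Python sample is to attach the probe to a pad that still carries batched RGBA buffers in NVMM memory, i.e. upstream of any JPEG encoder, and to convert the surface with NumPy/OpenCV. A minimal sketch under those assumptions, with cv2/numpy available and nvbuf-memory-type set to CUDA unified memory on dGPU:)

import cv2
import numpy as np
import pyds
# Gst and logger are the same objects already set up in the pipeline code above.

def rgba_probe(pad, info, udata):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # get_nvds_buf_surface only works on RGBA surfaces in (unified) NVMM memory
        n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        frame = np.array(n_frame, copy=True, order='C')
        frame = cv2.cvtColor(frame, cv2.COLOR_RGBA2BGR)
        cv2.imwrite(f"frame_{frame_meta.source_id}_{frame_meta.frame_num}.jpg", frame)
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

# Attached upstream of the encoder, e.g. on the RGBA capsfilter's src pad:
# filter.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, rgba_probe, 0)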

Thank you

When the number of sources is > 1 (with no probe added), I am getting this error:

0:02:25.952837920   555 0x7fcc2c2a4700 ERROR         nvvideoconvert gstnvvideoconvert.c:4095:gst_nvvideoconvert_transform: buffer transform failed

Later I upgraded from DeepStream 6.2 to 6.3 and changed jpegenc to nvjpegenc. Since then I am no longer getting the gst_nvvideoconvert_transform: buffer transform failed error, and I updated the probe to:

def osd_sink_pad_buffer_probe(pad, info, udata, ID=0):

    gst_buffer = info.get_buffer()
    if not gst_buffer:
        logger.error("Unable to get GstBuffer ")
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    if not batch_meta:
        return Gst.PadProbeReturn.OK

    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            logger.error("debug1------------------------------->at the except ")
            break


        source_id = frame_meta.source_id
        jpeg_data = gst_buffer.extract_dup(frame_meta.batch_id, gst_buffer.get_size())
        with open(f"{source_id}/captured_frame-{time.time()}.jpg", "wb") as f:
            f.write(jpeg_data)

        try:
            l_frame = l_frame.next
        except StopIteration:
            break

    return Gst.PadProbeReturn.OK

The problem I am facing is that some of the frames are corrupted.
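
(As an aside, one detail in the snippet above worth double-checking: extract_dup() is called with frame_meta.batch_id as the byte offset, so the copy does not start at byte 0 of the encoded buffer. A sketch of dumping the complete buffer via map() instead, assuming each buffer downstream of the encoder carries a single encoded image:)

# Sketch: copy the whole encoded buffer rather than starting the copy at an
# offset of frame_meta.batch_id bytes.
ok, map_info = gst_buffer.map(Gst.MapFlags.READ)
if ok:
    try:
        with open(f"{source_id}/captured_frame-{time.time()}.jpg", "wb") as f:
            f.write(map_info.data)
    finally:
        gst_buffer.unmap(map_info)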


Maybe you can try nvjpegenc.

I think jpegenc can't access buffers in memory:NVMM.
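
(If one wanted to keep the CPU-based jpegenc instead, the NVMM buffers would first have to be copied into system memory, for example with a second nvvideoconvert and a plain video/x-raw capsfilter; a sketch, with the element names assumed:)

# Sketch: convert NVMM buffers to system memory so the CPU jpegenc can map them.
conv_cpu = make_element("nvvideoconvert", "convertor-cpu")
caps_cpu = make_element("capsfilter", "filter-cpu")
caps_cpu.set_property("caps", Gst.Caps.from_string("video/x-raw, format=I420"))
pipeline.add(conv_cpu)
pipeline.add(caps_cpu)

nvvidconv.link(conv_cpu)
conv_cpu.link(caps_cpu)
caps_cpu.link(encoder)  # encoder = jpegenc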

By the way, can you share the full pipeline? I can't see the full pipeline from your code.

Yes, I switched to nvjpegenc and I can execute the pipeline, but some of the saved images are corrupted.

jpegenc.py (7.1 KB)

I have tried your code by modifying deepstream_test_1.py, and it works fine.

I noticed that the data after your encoder was sent to a tee. Did you modify it after the tee?

I guess this is why your pictures are abnormal.

After the tee it is just a queue and a fakesink on each branch:
tee >> queue >> fakesink
I am not editing the data after the tee. I got proper images when I used a demux instead of the tee. I would like to know what caused the image corruption when I used the tee. Is it because the processing in the probe is slow and caused frame drops, or because all branches of the tee access the buffer at the same time?
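
(One way to test the frame-drop hypothesis directly is to listen to the queue's overrun signal, which GstQueue emits when it hits its limits; a small sketch, assuming the per-branch queues created in the pipeline code above:)

# Sketch: log whenever a branch queue hits its limits, to verify or rule out
# the "probe is too slow, so frames are being dropped" hypothesis.
def on_overrun(q):
    logger.warning(f"{q.get_name()} is full; data may be dropped")

queue.connect("overrun", on_overrun)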


I believe that neither of these situations would cause the pictures to be abnormal.

deepstream_test_enc_multisrc.py (9.8 KB)

I noticed that you input multiple pictures at the same time to form a batch.

I think it may be because nvjpegenc does not support batch input.

I can reproduce your problem using this sample.
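
(A workaround consistent with that explanation, and with the earlier observation that nvstreamdemux produced correct images, is to split the batch before encoding so that each nvjpegenc instance only ever receives single frames; a rough sketch, assuming request pads named src_%u and one multifilesink per stream:)

# Sketch: demux the batch and give every stream its own converter, encoder and
# sink, so nvjpegenc never sees a batched buffer.
demux = make_element("nvstreamdemux", "demux")
pipeline.add(demux)
pgie.link(demux)  # replaces pgie -> nvvidconv -> encoder -> tee

for ID in stream_list:
    queue = make_element("queue", f"demux-queue-{ID}")
    conv = make_element("nvvideoconvert", f"convertor-{ID}")
    enc = make_element("nvjpegenc", f"encoder-{ID}")
    sink = make_element("multifilesink", f"filesink-{ID}")
    sink.set_property("location", f"stream{ID}_%05d.jpg")
    for elem in (queue, conv, enc, sink):
        pipeline.add(elem)

    demux.get_request_pad(f"src_{ID}").link(queue.get_static_pad("sink"))
    queue.link(conv)
    conv.link(enc)
    enc.link(sink)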
