Trouble recovering frame from buf_surface

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson ORIN NX 16GB
• DeepStream Version 6.2
• JetPack Version (valid for Jetson only) 5.1.2-b104
• TensorRT Version TRT 8.5.2.2
• NVIDIA GPU Driver Version (valid for GPU only) N/A
• Issue Type( questions, new requirements, bugs) Bug
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing) Running the pipeline for a few hours will lead to the bug.
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

I’m having trouble with the bug below:
nvbufsurface: Wrong buffer index (1)
get_nvds_buf_surface: Failed to sync buffer to CPU

These two messages appear at two different points in time. The first occurrence is at startup, where they do not break or crash the pipeline; when they appear a second time, the pipeline does crash. I am using the Python bindings, and I cannot upgrade to a newer DeepStream version either, as re-flashing the device for a newer JetPack is not an option at this time.

The code below is how I’m handling the buffer:

def handle_buffer_safely(self, gst_buffer, source_id):
    mapped = False
    try:
        if gst_buffer is None:
            logger.warning("GST buffer is None, cannot proceed.")
            return False, None
        if source_id < 0 or source_id >= self.target.number_of_inputs:
            logger.error(f"source_id {source_id} out of range.")
            return False, None
        # Map the requested frame of the batch into CPU-accessible memory.
        raw_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), source_id)
        mapped = True
        if raw_frame is None:
            logger.warning(
                f"Failed to get buffer surface for source_id {source_id}."
            )
            return False, None
        # Copy the mapped surface before converting, so nothing references it
        # after the unmap below.
        frame_copy = np.array(raw_frame, copy=True, order="C")
        converted_frame = cv2.cvtColor(frame_copy, cv2.COLOR_RGBA2BGRA)
        return True, converted_frame
    except Exception as e:
        logger.error(f"Exception in buffer handling for source_id {source_id}: {e}")
        return False, None
    finally:
        # Only unmap if the surface was actually mapped above.
        if mapped:
            try:
                pyds.unmap_nvds_buf_surface(hash(gst_buffer), source_id)
            except Exception as e:
                logger.error(f"Failed to unmap buffer for source_id {source_id}: {e}")

It is worth mentioning that this bug occurs with all sources, though not at the same time: all 3 of my cameras have at some point hit the wrong-index error followed by the sync failure. I am taking the source id from the frame meta returned by pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer)), so it should be correct, and no error logs from the buffer-handling function are printed, so it appears to run without problems.
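
As a hedged sketch (simplified, not the actual proprietary probe; the frame_probe name and the handler user-data argument are illustrative only), this is roughly how the source id is read from the batch meta and passed into the handler:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def frame_probe(pad, info, handler):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # frame_meta.source_id is what gets passed into handle_buffer_safely.
        handler.handle_buffer_safely(gst_buffer, frame_meta.source_id)
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK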

If I remove the calls to get_nvds_buf_surface and unmap_nvds_buf_surface, the pipeline does not crash, but I need those calls to extract the frames.

So I would like to know how to solve this, or workaround it.

There seems to be no problem with your code.

Can you provide a gst-launch command line or sample so that I can reproduce the problem?

I cannot share the sample code as it is proprietary, but I can give you the launch string so you can test it. I also have the following dot files and log, all taken with 3 input sources:
dot file pipeline from null to ready
dot file pipeline from ready to playing
log file using GST_DEBUG

The new streammux is being used already.
Launch string simplified for one input only:

gst-launch-1.0 rtspsrc location=<rtsp url> latency=3000 ! decodebin ! tee name=t t. ! queue ! nvvideoconvert ! video/x-raw(memory:NVMM),format=RGBA ! nvinfer batch-size=1 model-engine-file=<file path> config-file-path=<file path> ! queue ! nvstreammux width=1280 height=720 batch-size=1 name=muxer ! nvvideoconvert ! nvmultistreamtiler rows=1 columns=1 width=1280 height=720 ! nvdsosd ! nvvideoconvert ! video/x-raw(memory:NVMM),format=NV12 ! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=<file path> sync=true

I cannot reproduce the problem from this pipeline.

It is worth mentioning that this error is caused by an incorrect parameter being passed to get_nvds_buf_surface.

This is to map GPU memory to CPU memory so that numpy can access it.

I still suggest you check where the source_id value comes from.

I checked; the source_id values always arrive with the correct value, and at the time these errors occur the stream is available on all sources.
As you can see below, the code checks that the source id is indeed within the expected range:

        if source_id < 0 or source_id >= self.target.number_of_inputs:
            logger.error(f"source_id {source_id} out of range.")
            return False, None

Also the function:

pyds.get_nvds_buf_surface(hash(gst_buffer), source_id)

It does not fail when the error happens; it still returns True and the converted frame. If we cannot fix this, isn't there a way to avoid calling that function but still recover the frame?

There is no update from you for a period, assuming this is not an issue anymore. Hence we are closing this topic. If need further support, please open a new one. Thanks

When nvstreammux generates a batch, there is no guarantee that the batch contains data from all streams, especially when the input is a network stream.

So you’d better use batch_id.

Or can you reproduce the problem using deepstream-imagedata-multistream?
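
For illustration, a minimal hedged sketch of that suggestion (the probe name is an assumption; the key change is passing frame_meta.batch_id, the frame's index within the current batch, to get_nvds_buf_surface instead of a stream-level id, following the pattern used in deepstream-imagedata-multistream):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import numpy as np
import pyds

def tiler_sink_pad_probe(pad, info, user_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # batch_id is the frame's position inside this particular batch;
        # source_id still identifies which camera the frame came from.
        raw_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        frame_copy = np.array(raw_frame, copy=True, order="C")
        pyds.unmap_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        # ... use frame_copy, keyed by frame_meta.source_id ...
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK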

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.