How to add cv2 based image processing probe in the deepstream pipeline

• Hardware Platform (Jetson / GPU): NVIDIA GeForce RTX 3060
• DeepStream Version: 7.1
• TensorRT Version: 10.3
• NVIDIA GPU Driver Version (valid for GPU only): 560.35.03

I want to dehaze the frames with cv2-based image processing. Currently I attach a probe function before the nvinfer element, but this gives me a segmentation fault.

My dehaze probe function is below.

def dehaze_probe(pad, info, user_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer")
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list

    while l_frame is not None:
        try:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break

        # Get frame width and height
        frame_width = frame_meta.source_frame_width
        frame_height = frame_meta.source_frame_height

        # Get raw image data
        n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)

        # Convert to OpenCV format
        frame = np.array(n_frame, copy=True, order='C')

        # Apply Dehazing
        dehazed_frame = dehaze_frame(frame)

        # Convert back to GstBuffer
        np.copyto(n_frame, dehazed_frame)

        try:
            l_frame = l_frame.next
        except StopIteration:
            break

    return Gst.PadProbeReturn.OK
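For reference, here is a simplified stand-in for my dehaze_frame (a plain per-channel contrast stretch, not the actual dehazing algorithm) that shows the shape/dtype contract I rely on: the output must have the same shape and dtype as the RGBA uint8 array from get_nvds_buf_surface so that np.copyto back into the mapped surface works.

```python
import numpy as np

def dehaze_frame_stub(frame: np.ndarray) -> np.ndarray:
    """Placeholder for the real dehazing: a per-channel min-max contrast
    stretch on the RGB channels of an RGBA uint8 frame. Output keeps the
    same shape and dtype, which np.copyto into the mapped surface needs."""
    out = frame.copy()
    rgb = out[..., :3].astype(np.float32)
    lo = rgb.min(axis=(0, 1), keepdims=True)
    hi = rgb.max(axis=(0, 1), keepdims=True)
    # Avoid division by zero on flat channels; leave them unscaled.
    scale = np.where(hi > lo, 255.0 / (hi - lo + 1e-6), 1.0)
    out[..., :3] = np.clip((rgb - lo) * scale, 0, 255).astype(np.uint8)
    return out
```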

Is there a better approach to retrieve the frame data from the buffer, do the cv2 processing, and send the frames back into the pipeline?


/opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-dsexample/ is open source. By default it blurs the detected objects with OpenCV. I suggest modifying this plugin to do your dehazing processing with OpenCV. Please refer to the following command.

gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! mux.sink_0 nvstreammux name=mux batch-size=1 width=1280 height=720 ! nvinfer  config-file-path=./dstest2_pgie_config.txt  !  nvvideoconvert ! 'video/x-raw(memory:NVMM),format=RGBA' ! dsexample full-frame=FALSE blur-objects=TRUE ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=RGBA' ! nvdsosd !   nvvideoconvert ! nvegltransform ! nveglglessink

That runs post-infer. Is there a way to perform the pre-processing before the buffer is passed to nvinfer, and how can we modify the plugin and use it with the Python bindings?

  1. Can you provide a Python sample showing how to get the frame buffer, do the processing on it, and send it back in a form that the downstream elements accept?
  2. I got the entire pipeline running with the cv2 processing by enabling CUDA unified memory at the nvvidconv element, where I call the probe function for the cv2 dehazing, but it turned out to be very slow and the dehazing result is wrong, even though the same function performs very well on single images I have tested outside the pipeline.
  3. Is there a color format that all the pipeline elements accept? What if I convert the format to RGB right after the streammux and send it to all the downstream elements at once?
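For question 3, the ordering I have in mind can be sketched as a gst-launch-style description (built in Python only for readability; the element names come from the command above, and nvbuf-memory-type=3 is my assumption for unified CUDA memory on dGPU so the probe can map the surface):

```python
def build_pipeline_desc(filepath, width=1280, height=720):
    """Sketch of the intended ordering: convert to RGBA right after
    nvstreammux so a cv2 probe on the capsfilter's src pad sees a
    mappable RGBA surface, then pass the buffer on to nvinfer.
    nvbuf-memory-type=3 (unified CUDA memory) is an assumption."""
    elements = [
        f"filesrc location={filepath}",
        "qtdemux",
        "h264parse",
        "nvv4l2decoder",
        f"mux.sink_0 nvstreammux name=mux batch-size=1 width={width} height={height}",
        "nvvideoconvert nvbuf-memory-type=3",
        "capsfilter caps=video/x-raw(memory:NVMM),format=RGBA",
        "nvinfer config-file-path=./dstest2_pgie_config.txt",
        "nvdsosd",
        "fakesink",
    ]
    return " ! ".join(elements)
```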

Please refer to the Python sample deepstream-imagedata-multistream-cupy. As its README says: “Access imagedata buffer from GPU in a multistream source as CuPy array. Modify the images in-place. Changes made to the buffer will reflect in the downstream, but color format, resolution and CuPy transpose operations are not permitted.”
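The in-place constraint means the probe may only write through the existing array, never rebind, resize, or transpose it. The pattern looks like this, shown here with NumPy as a stand-in (CuPy shares the same slicing API), under the assumption that the mapped frame is RGBA uint8:

```python
import numpy as np

def adjust_in_place(n_frame):
    """Write through the mapped buffer instead of rebinding the name.
    Allowed: element-wise, shape-preserving assignment via slicing.
    Not allowed per the sample README: changing color format or
    resolution, or transposing the array."""
    # Example in-place edit: invert the RGB channels, leave alpha alone.
    n_frame[..., :3] = 255 - n_frame[..., :3]
    # Rebinding (n_frame = some_new_array) would NOT reach the pipeline,
    # because downstream elements keep reading the original memory.
    return n_frame
```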