I wanted to use image processing (OpenCV) to dehaze the frames. Currently I am attaching a probe function before the nvinfer element, but this gives me a segmentation fault.
I am providing my dehaze probe function below.
import numpy as np
import pyds
from gi.repository import Gst

def dehaze_probe(pad, info, user_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer")
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break
        # Get frame width and height
        frame_width = frame_meta.source_frame_width
        frame_height = frame_meta.source_frame_height
        # Get raw image data
        n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        # Convert to OpenCV format
        frame = np.array(n_frame, copy=True, order='C')
        # Apply dehazing
        dehazed_frame = dehaze_frame(frame)
        # Copy the result back into the mapped buffer
        np.copyto(n_frame, dehazed_frame)
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
Is there a better approach to retrieve the frame data from the buffer, do the cv2 processing, and send the frames back into the pipeline?
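For reference, here is a minimal NumPy-only sketch of what a dark-channel-prior `dehaze_frame` could look like. The name matches the helper called in the probe above, but this is a hypothetical illustration, not the poster's implementation; the `omega`, `t_min` and `patch` parameters are assumptions.

```python
import numpy as np

def dehaze_frame(frame, omega=0.95, t_min=0.1, patch=15):
    """Hypothetical dark-channel-prior dehaze; frame is an HxWx3 uint8 array."""
    img = frame.astype(np.float64) / 255.0
    # Dark channel: per-pixel minimum over the color channels,
    # then a local minimum filter over a patch x patch window.
    dark = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(dark, pad, mode='edge')
    windows = np.lib.stride_tricks.sliding_window_view(padded, (patch, patch))
    dark = windows.min(axis=(2, 3))
    # Atmospheric light: mean color of the brightest 0.1% dark-channel pixels.
    n = max(1, dark.size // 1000)
    idx = np.argsort(dark.ravel())[-n:]
    A = img.reshape(-1, 3)[idx].mean(axis=0)
    # Transmission estimate, clamped to t_min, then radiance recovery.
    t = np.maximum(1.0 - omega * (img / A).min(axis=2), t_min)
    out = (img - A) / t[..., None] + A
    return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)
```

Note that this works on a plain 3-channel array; if the mapped DeepStream surface is RGBA, the alpha channel would have to be split off and re-attached around the call.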
/opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-dsexample/ is open source. By default, it supports blurring the objects with OpenCV. I suggest modifying this plugin to do your dehazing processing with OpenCV. Please refer to the following cmd.
Can you provide a sample in Python? How do I get the frame buffer, do the processing on it, and send it back in a form that the downstream element accepts?
I have got the entire pipeline running with the cv2 processing: I enabled CUDA unified memory at the nvvidconv element, where I call the probe function for the cv2 dehazing. But this turned out to be very slow, and the dehazing is not done properly, even though it performs very well on single images I have tested outside the pipeline.
Is there a color format that all the pipeline elements accept? What if I convert the format to RGB after the streammux and send it to the downstream elements at once?
Please refer to the Python sample deepstream-imagedata-multistream-cupy. As the README says: "Access imagedata buffer from GPU in a multistream source as CuPy array. Modify the images in-place. Changes made to the buffer will reflect in the downstream, but color format, resolution and CuPy transpose operations are not permitted."
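The "modify the images in-place" requirement is the part that trips people up: rebinding the Python name to a new array does not touch the mapped surface; only writes into the existing array (e.g. `np.copyto` or slice assignment) propagate downstream. A small NumPy illustration of the difference, with a plain array standing in for the mapped buffer:

```python
import numpy as np

# Stand-in for the array returned by get_nvds_buf_surface /
# get_nvds_buf_surface_gpu: a view onto pipeline-owned memory.
backing = np.zeros((4, 4, 4), dtype=np.uint8)
n_frame = backing  # same memory

# Rebinding the name creates a NEW array; the buffer is unchanged.
n_frame = n_frame + 50
print(backing[0, 0, 0])   # still 0

# In-place copy modifies the buffer the pipeline sees.
n_frame = backing
np.copyto(n_frame, np.full_like(backing, 50))
print(backing[0, 0, 0])   # 50

# Slice assignment is also in-place.
n_frame[...] = 120
print(backing[0, 0, 0])   # 120
```

The same rule explains why color-format and resolution changes are forbidden: they would require allocating a different array rather than writing into the existing one.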